Columns: source_id — int64 (1 to 4.64M); question — string (0 to 28.4k chars); response — string (0 to 28.8k chars); metadata — dict
159,650
Consider the random walk defined as $Y_{t} = Y_{t-1} + e_t$, where $e_t$ is white noise. This says that the current position is the previous position plus an unpredictable term. You can prove that the mean function is $\mu_t = 0$ (taking $Y_0 = 0$), since $$E(Y_{t}) = E(e_1+ e_2+ \cdots +e_t) = E(e_1) + E(e_2) + \cdots +E(e_t) = 0 + 0 + \cdots + 0 = 0.$$ But why is it that the variance increases linearly with time? Does this have something to do with the fact that the walk is not "purely" random, since the new position is highly correlated with the previous one? EDIT: I now have a much better understanding after visualizing a large sample of random walks: we can easily observe that the overall variance does increase over time, while the mean stays around zero as expected. Maybe this was trivial after all, since in the very early stages of the time series (compare time = 10 with time = 100) the random walkers have not yet had time to explore as much.
In short, because it keeps adding the variance of the next increment to the variability we already have in getting to where we are now: \begin{align} \text{Var}(Y_{t}) &= \text{Var}(e_1+ e_2+ \cdots +e_t) \\ &= \text{Var}(e_1) + \text{Var}(e_2) + \cdots +\text{Var}(e_t) && \text{(independence)} \\ &= \sigma^2 + \sigma^2 + \cdots + \sigma^2 = t\sigma^2\,, \end{align} and we can see that $t\sigma^2$ increases linearly with $t$. The mean is zero at each time point; if you simulated the series many times and averaged across series at a given time, that average would be something near 0. Figure: 500 simulated random walks with the sample mean in white and $\pm$ one standard deviation in red; the standard deviation increases with $\sqrt{t}$.
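A quick simulation (parameters of my own choosing, not part of the original answer) confirms both claims: the cross-sectional mean stays near zero while the variance grows like $t\sigma^2$.

    import numpy as np

    rng = np.random.default_rng(0)
    sigma, T, n_walks = 1.0, 100, 5000
    e = rng.normal(0.0, sigma, size=(n_walks, T))   # white-noise increments e_t
    Y = e.cumsum(axis=1)                            # Y_t = e_1 + ... + e_t (with Y_0 = 0)

    for t in (10, 50, 100):
        print(t, round(Y[:, t - 1].mean(), 2), round(Y[:, t - 1].var(), 1), t * sigma**2)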
{ "source": [ "https://stats.stackexchange.com/questions/159650", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/76937/" ] }
159,657
I'm interested in looking at several different metrics for ranking algorithms - there are a few listed on the Learning to Rank wikipedia page, including: • Mean average precision (MAP); • DCG and NDCG; • Precision@n, NDCG@n, where "@n" denotes that the metrics are evaluated only on the top n documents; • Mean reciprocal rank; • Kendall's tau • Spearman's Rho • Expected reciprocal rank • Yandex's pfound but it isn't clear to me what the advantages/disadvantages of each are, or when you might choose one over another (or what it would mean if one algorithm outperformed another on NDCG but was worse when evaluated with MAP). Is there anywhere I can go to learn more about these questions?
I am actually looking for the same answer, but I should be able to at least partially answer your question. All of the metrics that you have mentioned have different traits and, unfortunately, the one you should pick depends on what you actually would like to measure. Here are some things worth keeping in mind: Spearman's rho penalises errors at the top of the list with the same weight as mismatches at the bottom, so in most cases it is not the metric to use for evaluating rankings. DCG & NDCG are among the few metrics that take a non-binary utility function into account, so you can describe how useful a record is rather than just whether it is useful. DCG & NDCG have fixed weights for positions, so a document in a given position always has the same gain and discount, independently of the documents shown above it. You would usually prefer NDCG over DCG because it normalises the value by the DCG of the ideal ranking. MAP is supposed to be a classic, 'go-to' metric for this problem and it seems to be a standard in the field. (N)DCG should always be computed for a fixed number of records (@k), because it has a long tail (lots of irrelevant records at the end of the ranking strongly bias the metric). This doesn't apply to MAP. Mean Reciprocal Rank only marks the position of the first relevant document, so if you care about getting as many relevant docs as possible high on the list, this should not be your choice. Kendall's tau only handles a binary utility function, and it should also be computed @k (similar to NDCG). Valuable resources: Victor Lavrenko's lecture on YouTube - it's only a link to the MAP vs NDCG episode, but the whole lecture covers much more (including Kendall's tau). You should definitely check it out, great lecture! The ERR paper. Can't post more links because of the fresh account :) If anybody has some more remarks or ideas, I would be happy to hear them as well!
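To make a few of these concrete, here is a small sketch with made-up relevance labels of my own (not from the question), using one common variant of DCG/NDCG@k, average precision for a single query (MAP is its mean over queries), and reciprocal rank:

    import numpy as np

    rel = np.array([3, 2, 0, 1, 0, 2])          # graded relevance of ranked documents, top first
    binary = (rel > 0).astype(int)              # binary relevance for AP / MRR

    def dcg_at_k(scores, k):
        scores = np.asarray(scores, dtype=float)[:k]
        return np.sum((2 ** scores - 1) / np.log2(np.arange(2, scores.size + 2)))

    def ndcg_at_k(scores, k):
        ideal = dcg_at_k(sorted(scores, reverse=True), k)   # normalise by the ideal ordering's DCG
        return dcg_at_k(scores, k) / ideal if ideal > 0 else 0.0

    def average_precision(bin_rel):             # AP for one query; MAP averages this over queries
        hits = np.flatnonzero(bin_rel)
        return np.mean([(i + 1) / (r + 1) for i, r in enumerate(hits)]) if hits.size else 0.0

    def reciprocal_rank(bin_rel):               # only the first relevant document matters
        hits = np.flatnonzero(bin_rel)
        return 1.0 / (hits[0] + 1) if hits.size else 0.0

    print(ndcg_at_k(rel, 5), average_precision(binary), reciprocal_rank(binary))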
{ "source": [ "https://stats.stackexchange.com/questions/159657", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/42494/" ] }
160,179
I was trying to learn machine learning using the Coursera material . In this lecture, Andrew Ng uses gradient descent algorithm to find the coefficients of the linear regression model that will minimize the error function (cost function). For linear regression, do we need gradient descent? It seems I can analytically differentiate the error function and set it to zero to solve for the coefficients; is that right?
Linear least squares can be solved by: 0) Using a high quality linear least squares solver, based on either SVD or QR, as described below, for unconstrained linear least squares, or based on a version of Quadratic Programming or Conic Optimization for bound or linearly constrained least squares, as described below. Such a solver is pre-canned, heavily tested, and ready to go - use it. 1) SVD, which is the most reliable and numerically accurate method, but also takes more computing than alternatives. In MATLAB, the SVD solution of the unconstrained linear least squares problem A*X = b is pinv(A) * b, which is very accurate and reliable. 2) QR, which is fairly reliable and numerically accurate, but not as much as SVD, and is faster than SVD. In MATLAB, the QR solution of the unconstrained linear least squares problem A*X = b is A\b, which is fairly accurate and reliable, except when A is ill-conditioned, i.e., has a large condition number. A\b is faster to compute than pinv(A) * b, but not as reliable or accurate. 3) Forming the normal equations (TERRIBLE from a reliability and numerical accuracy standpoint, because it squares the condition number, which is a very bad thing to do) and 3a) solving by Cholesky factorization (not good) or 3b) explicitly inverting the matrix (HORRIBLE). 4) Solving as a Quadratic Programming problem or Second Order Cone problem: 4a) Solve using high quality Quadratic Programming software. This is reliable and numerically accurate, but takes longer than SVD or QR. However, it is easy to add bound or general linear constraints, or linear or quadratic (two norm) penalty or regularization terms to the objective function, and still solve the problem using Quadratic Programming software. 4b) Solve as a Second Order Cone problem using high quality Conic Optimization software. Remarks are the same as for Quadratic Programming software, but you can also add bound or general linear constraints and other conic constraints or objective function terms, such as penalty or regularization terms in various norms. 5) Solve using high quality general purpose nonlinear optimization software. This may still work well, but will in general be slower than Quadratic Programming or Conic Optimization software, and maybe not quite as reliable. However, it may be possible to include not only bound and general linear constraints, but also nonlinear constraints in the least squares optimization. It can also be used for nonlinear least squares, and if other nonlinear terms are added to the objective function. 6) Solve using lousy general purpose nonlinear optimization algorithms --> DON'T EVER DO THIS. 7) Solve using THE WORST POSSIBLE general purpose nonlinear optimization algorithm there is, i.e., gradient descent. Use this only if you want to see how bad and unreliable a solution method can be. If someone tells you to use gradient descent to solve linear least squares problems: 7 i) learn about statistical computing from someone who knows something about it, and 7 ii) learn optimization from someone who knows something about it.
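A small numerical illustration (random data of my own, not tied to the answer above) in NumPy rather than MATLAB: the SVD-based solve and the pre-canned least-squares driver agree closely, while forming the normal equations explicitly squares the condition number and tends to lose accuracy on an ill-conditioned design.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 200)
    A = np.vander(t, 8)                               # deliberately ill-conditioned design matrix
    x_true = np.arange(1.0, 9.0)
    b = A @ x_true + 1e-6 * rng.normal(size=200)

    x_svd = np.linalg.pinv(A) @ b                     # SVD-based solve (the analogue of pinv(A)*b)
    x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)   # NumPy's pre-canned least-squares driver
    x_norm = np.linalg.solve(A.T @ A, A.T @ b)        # explicit normal equations

    print("cond(A) =", np.linalg.cond(A))
    for name, x in [("svd", x_svd), ("lstsq", x_lstsq), ("normal eqs", x_norm)]:
        print(name, np.max(np.abs(x - x_true)))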
{ "source": [ "https://stats.stackexchange.com/questions/160179", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/61158/" ] }
160,230
I understand the proof that $$Var(aX+bY) = a^2Var(X) +b^2Var(Y) + 2abCov(X,Y), $$ but I don't understand how to prove the generalization to arbitrary linear combinations. Let $a_i$ be scalars for $i\in \{1,\dots ,n\}$, so we have a vector $\underline a$, and let $\underline X = (X_1,\dots ,X_n)$ be a vector of correlated random variables. Then $$ Var(a_1X_1 + \dots +a_nX_n) = \sum_{i=1}^n a_i^2 \sigma_i^2 + 2 \sum_{i=1}^n \sum_{j>i}^n a_i a_j \text{ Cov}(X_i,X_j)$$ How do we prove this? I imagine there are proofs in summation notation and in vector notation.
This is just an exercise in applying basic properties of sums, the linearity of expectation, and definitions of variance and covariance \begin{align} \operatorname{var}\left(\sum_{i=1}^n a_i X_i\right) &= E\left[\left(\sum_{i=1}^n a_i X_i\right)^2\right] - \left(E\left[\sum_{i=1}^n a_i X_i\right]\right)^2 &\scriptstyle{\text{one definition of variance}}\\ &= E\left[\sum_{i=1}^n\sum_{j=1}^n a_i a_j X_iX_j\right] - \left(E\left[\sum_{i=1}^n a_i X_i\right]\right)^2 &\scriptstyle{\text{basic properties of sums}}\\ &= \sum_{i=1}^n\sum_{j=1}^n a_i a_j E[X_iX_j] - \left(\sum_{i=1}^n a_i E[X_i]\right)^2 &\scriptstyle{\text{linearity of expectation}}\\ &= \sum_{i=1}^n\sum_{j=1}^n a_i a_j E[X_iX_j] - \sum_{i=1}^n \sum_{j=1}^n a_ia_j E[X_i]E[X_j] &\scriptstyle{\text{basic properties of sums}}\\ &= \sum_{i=1}^n\sum_{j=1}^n a_i a_j \left(E[X_iX_j] - E[X_i]E[X_j]\right)&\scriptstyle{\text{combine the sums}}\\ &= \sum_{i=1}^n\sum_{j=1}^n a_i a_j\operatorname{cov}(X_i,X_j) &\scriptstyle{\text{apply a definition of covariance}}\\ &= \sum_{i=1}^n a_i^2\operatorname{var}(X_i) + 2\sum_{i=1}^n \sum_{j\colon j > i}^n a_ia_j\operatorname{cov}(X_i,X_j) &\scriptstyle{\text{re-arrange sum}}\\ \end{align} Note that in that last step, we have also identified $\operatorname{cov}(X_i,X_i)$ as the variance $\operatorname{var}(X_i)$.
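As a quick numerical sanity check of the identity (using a toy covariance matrix of my own choosing): in vector notation the final expression is just $\operatorname{var}(\underline a^\top \underline X) = \underline a^\top \Sigma\, \underline a$, which we can compare against a Monte Carlo estimate.

    import numpy as np

    rng = np.random.default_rng(0)
    a = np.array([1.0, -2.0, 0.5])
    Sigma = np.array([[ 2.0, 0.3, -0.5],
                      [ 0.3, 1.0,  0.2],
                      [-0.5, 0.2,  1.5]])          # a valid (positive-definite) covariance matrix
    X = rng.multivariate_normal(np.zeros(3), Sigma, size=500_000)

    lhs = np.var(X @ a)                            # Monte Carlo estimate of var(a1*X1 + a2*X2 + a3*X3)
    rhs = a @ Sigma @ a                            # the double sum over a_i a_j cov(X_i, X_j)
    print(lhs, rhs)                                # should agree closely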
{ "source": [ "https://stats.stackexchange.com/questions/160230", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/81598/" ] }
160,354
I am hoping to get an intuitive, accessible explanation of quantile regression. Let's say I have a simple dataset of outcome $Y$, and predictors $X_1, X_2$. If, for example, I run a quantile regression at .25,.5,.75, and get back $\beta_{0,.25},\beta_{1,.25}...\beta_{2,.75}$. Are the $\beta$ values found by simply ordering the $y$ values, and performing a linear regression based on the examples which are at/near the given quantile? Or do all of the samples contribute to the $\beta$ estimates, with descending weights as the distance from the quantile increases? Or is it something totally different? I've yet to find an accessible explanation.
I recommend Koenker & Hallock (2001, Journal of Economic Perspectives) and Koenker's textbook Quantile Regression . The starting point is the observation that the median of a data set minimizes the sum of absolute errors . That is, the 50% quantile is a solution to a particular optimization problem (to find the value that minimizes the sum of absolute errors). From this, it is easy to find that any $\tau$ -quantile is the solution to a specific minimization problem, namely to minimize a sum of asymmetrically weighted absolute errors, with weights that depend on $\tau$ . Finally, to make the step to regression, we model the solution to this minimization problem as a linear combination of predictor variables, so now the problem is one of finding not a single value, but a set of regression parameters. So your intuition is quite correct: all of the samples contribute to the $\beta$ estimates, with asymmetric weights depending on the quantile $\tau$ we aim for.
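Here is a minimal sketch of that optimization view (simulated data and variable names of my own, not from the question): for each $\tau$ we minimize the asymmetrically weighted absolute-error ("pinball") loss over all observations, so every sample contributes with weight $\tau$ or $1-\tau$.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n = 2000
    x1, x2 = rng.normal(size=n), rng.normal(size=n)
    y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)
    X = np.column_stack([np.ones(n), x1, x2])

    def pinball_loss(beta, tau):
        u = y - X @ beta
        return np.sum(u * (tau - (u < 0)))          # rho_tau(u) = u * (tau - 1[u < 0])

    for tau in (0.25, 0.5, 0.75):
        fit = minimize(pinball_loss, x0=np.zeros(3), args=(tau,), method="Nelder-Mead")
        print(tau, np.round(fit.x, 2))              # quantile regression coefficients at each tau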
{ "source": [ "https://stats.stackexchange.com/questions/160354", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/28729/" ] }
160,359
Following a comment from a previous thread , I want to know how one can test for the assumption of common trend between the treatment and control group in the Difference in Difference method? Can I test that assumption with data of two time points (for example, baseline survey in 2002, treatment happens from 2002 to 2006 and follow-up survey in 2006)? Thank you very much! Edited: After posting this question, the "related" panel leads me to this unanswered question , in which the asker wanted to understand intuitions behind a method to account for time trends in DID method. I want to link it here as that question is very interesting to me too. Thank you!
The typical thing to do is visual inspection of the pre-treatment trends for the control and treatment group. This is particularly easy if you only have those two groups given a single binary treatment. Ideally the pre-treatment trends should look something like this: This graph was taken from a previous answer to the question why we need the common trends assumption. This includes also an explanation of the blue-dashed line which is the counterfactual outcome for the treated that can be assumed if we can reasonably verify the parallel trends assumption. A formal test which is also suitable for multivalued treatments or several groups is to interact the treatment variable with time dummies. Suppose you have 3 pre-treatment periods and 3 post-treatment periods; you would then regress $$y_{it} = \lambda_i + \delta_t + \beta_{-2}\,D_i\cdot\mathbb{1}[t=-2] + \beta_{-1}\,D_i\cdot\mathbb{1}[t=-1] + \beta_1\, D_i\cdot\mathbb{1}[t=1] + \beta_2\, D_i\cdot\mathbb{1}[t=2] + \beta_3\, D_i\cdot\mathbb{1}[t=3] + \epsilon_{it}$$ where $y$ is the outcome for individual $i$ at time $t$, $\lambda$ and $\delta$ are individual and time fixed effects, and $D_i$ indicates treated individuals (this is a generalized way of writing down the diff-in-diff model which also allows for multiple treatments or treatments at different times). The idea is the following. You include the interactions of the time dummies and the treatment indicator for the first two pre-treatment periods and you leave out the interaction for the last pre-treatment period due to the dummy variable trap. All the other interactions are then expressed relative to the omitted period, which serves as the baseline. If the outcome trends between treatment and control group are the same, then $\beta_{-2}$ and $\beta_{-1}$ should be insignificant, i.e. the difference in differences is not significantly different between the two groups in the pre-treatment period. An attractive feature of this test is that the interactions of the post-treatment time dummies with the treatment indicator are also informative. For instance, $\beta_{1}, \beta_2, \beta_3$ show you whether the treatment effect fades out over time, stays constant, or even increases. An application of this approach is Autor (2003). Note that the literature generally refers to $\beta_{-2}, \beta_{-1}$ as "leads" and $\beta_{1}, \beta_2, \beta_3$ as "lags", even though they are merely interactions of the treatment indicator with time dummies and are not actually leads and lags of the treatment indicator in a time-series jargon sense. A more detailed explanation of this parallel trends test is provided in the lecture notes by Steve Pischke (here on page 7, or here on page 9).
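As a rough illustration of that leads-and-lags regression (entirely simulated data and my own variable names, with statsmodels' formula interface standing in for whatever software you use): the lead coefficients should be near zero if pre-trends are parallel, while the lags trace out the post-treatment effect.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    periods = [-2, -1, 0, 1, 2, 3]                 # period 0 = last pre-treatment period (baseline)
    rows = []
    for i in range(200):
        treated = int(i < 100)
        alpha_i = rng.normal()
        for t in periods:
            effect = 1.0 if (treated and t >= 1) else 0.0
            rows.append(dict(unit=i, period=t, D=treated,
                             y=alpha_i + 0.5 * t + effect + rng.normal()))
    df = pd.DataFrame(rows)

    # treatment indicator interacted with each period dummy, omitting the baseline period 0
    for t in periods:
        if t != 0:
            df[f"D_t{t}".replace("-", "m")] = df["D"] * (df["period"] == t).astype(int)

    fit = smf.ols("y ~ C(unit) + C(period) + D_tm2 + D_tm1 + D_t1 + D_t2 + D_t3", data=df).fit()
    print(fit.params[["D_tm2", "D_tm1", "D_t1", "D_t2", "D_t3"]])   # leads ~ 0, lags ~ true effect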
{ "source": [ "https://stats.stackexchange.com/questions/160359", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/79710/" ] }
160,362
I need to minimize a complicated loss function $f\left(\Lambda\right)$ over the set of symmetric matrices $S_{p}$ of dimension p, such that all the eigenvalues of $\Lambda$ lie in $\left[0,1\right]$. I want to set this up as a univariate optimization problem. Using the spectral decomposition, I can write $\Lambda=Q \Sigma Q^\top$, where $\Sigma$ is a p-dimensional diagonal matrix with the eigenvalues along the diagonal. I have the freedom to choose the orthogonal matrix Q. Are there any strategies/characterizations for selecting Q? What kind of restrictions can I impose on Q? Is there a better way to solve this problem? I also know that $$\lim_{k \to \infty }\left(\Lambda-I_p\right)^k = 0$$
{ "source": [ "https://stats.stackexchange.com/questions/160362", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/36263/" ] }
160,479
I'm currently going through Bergstra and Bengio's Random Search for Hyper-Parameter Optimization [1], where the authors claim random search is more efficient than grid search at achieving approximately equal performance. My question is: Do people here agree with that claim? In my work I've been using grid search mostly because of the lack of tools available to perform random search easily. What is the experience of people using grid vs. random search?
Random search has a 95% probability of finding a combination of parameters within the best 5% of the search space with only 60 iterations. Also, compared to other methods it doesn't bog down in local optima. Check this great blog post at Dato by Alice Zheng, specifically the section Hyperparameter tuning algorithms: I love movies where the underdog wins, and I love machine learning papers where simple solutions are shown to be surprisingly effective. This is the storyline of “Random search for hyperparameter optimization” by Bergstra and Bengio. [...] Random search wasn’t taken very seriously before. This is because it doesn’t search over all the grid points, so it cannot possibly beat the optimum found by grid search. But then came along Bergstra and Bengio. They showed that, in surprisingly many instances, random search performs about as well as grid search. All in all, trying 60 random points sampled from the grid seems to be good enough. In hindsight, there is a simple probabilistic explanation for the result: for any distribution over a sample space with a finite maximum, the maximum of 60 random observations lies within the top 5% of the true maximum, with 95% probability. That may sound complicated, but it’s not. Imagine the 5% interval around the true maximum. Now imagine that we sample points from this space and see if any of them lands within that interval. Each random draw has a 5% chance of landing in that interval; if we draw n points independently, then the probability that all of them miss the desired interval is $\left(1-0.05\right)^{n}$. So the probability that at least one of them succeeds in hitting the interval is 1 minus that quantity. We want at least a .95 probability of success. To figure out the number of draws we need, just solve for n in the equation: $$1-\left(1-0.05\right)^{n}>0.95$$ We get $n\geqslant60$. Ta-da! The moral of the story is: if the close-to-optimal region of hyperparameters occupies at least 5% of the grid surface, then random search with 60 trials will find that region with high probability. You can improve that chance with a higher number of trials. All in all, if you have too many parameters to tune, grid search may become infeasible. That's when I try random search.
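Reproducing that arithmetic (a two-line check of my own): the inequality $1-(1-0.05)^n > 0.95$ is first satisfied at $n = 59$; 60 is simply the convenient round figure used in the post.

    import math

    n_min = math.ceil(math.log(1 - 0.95) / math.log(1 - 0.05))
    print(n_min)                              # 59
    print(1 - 0.95 ** 59, 1 - 0.95 ** 60)     # ~0.9515 and ~0.9539, both above 0.95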
{ "source": [ "https://stats.stackexchange.com/questions/160479", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/16052/" ] }
160,552
I have some basic data on emission reductions and cost per car: q24 <- read.table(text = "reductions cost.per.car 50 45 55 55 60 62 65 70 70 80 75 90 80 100 85 200 90 375 95 600 ",header = TRUE, sep = "") I know that this is an exponential function, so I expect to be able to find a model that fits with: model <- nls(cost.per.car ~ a * exp(b * reductions) + c, data = q24, start = list(a=1, b=1, c=0)) but I'm getting an error: Error in nlsModel(formula, mf, start, wts) : singular gradient matrix at initial parameter estimates I've read through a ton of questions on the error I'm seeing and I'm gathering that the problem is probably that I need better/different start values (the initial parameter estimates makes a little more sense) but I'm not sure, given the data that I have, how I'd go about estimating better parameters.
Automatically finding good starting values for a nonlinear model is an art. (It's relatively easy for one-off datasets when you can just plot the data and make some good guesses visually.) One approach is to linearize the model and use least squares estimates. In this case, the model has the form $$\mathbb{E}(Y) = a \exp(b x) + c$$ for unknown parameters $a,b,c$. The presence of the exponential encourages us to use logarithms--but the addition of $c$ makes it difficult to do that. Notice, though, that if $a$ is positive then $c$ will be less than the smallest expected value of $Y$--and therefore might be a little less than the smallest observed value of $Y$. (If $a$ could be negative you will also have to consider a value of $c$ that is a little greater than the largest observed value of $Y$.) Let us, then, take care of $c$ by using as initial estimate $c_0$ something like half the minimum of the observations $y_i$. The model can now be rewritten without that thorny additive term as $$\mathbb{E}(Y) - c_0 \approx a \exp(b x).$$ That we can take the log of: $$\log(\mathbb{E}(Y) - c_0) \approx \log(a) + b x.$$ That is a linear approximation to the model. Both $\log(a)$ and $b$ can be estimated with least squares. Here is the revised code: c.0 <- min(q24$cost.per.car) * 0.5 model.0 <- lm(log(cost.per.car - c.0) ~ reductions, data=q24) start <- list(a=exp(coef(model.0)[1]), b=coef(model.0)[2], c=c.0) model <- nls(cost.per.car ~ a * exp(b * reductions) + c, data = q24, start = start) Its output (for the example data) is Nonlinear regression model model: cost.per.car ~ a * exp(b * reductions) + c data: q24 a b c 0.003289 0.126805 48.487386 residual sum-of-squares: 2243 Number of iterations to convergence: 38 Achieved convergence tolerance: 1.374e-06 The convergence looks good. Let's plot it: plot(q24) p <- coef(model) curve(p["a"] * exp(p["b"] * x) + p["c"], lwd=2, col="Red", add=TRUE) It worked well! When automating this, you might perform some quick analyses of the residuals, such as comparing their extremes to the spread in the ($y$) data. You might also need analogous code to deal with the possibility $a\lt 0$; I leave that as an exercise. Another method to estimate initial values relies on understanding what they mean, which can be based on experience, physical theory, etc. An extended example of a (moderately difficult) nonlinear fit whose initial values can be determined in this way is described in my answer at https://stats.stackexchange.com/a/15769 . Visual analysis of a scatterplot (to determine initial parameter estimates) is described and illustrated at https://stats.stackexchange.com/a/32832 . In some circumstances, a sequence of nonlinear fits is made where you can expect the solutions to change slowly. In that case it's often convenient (and fast) to use the previous solutions as initial estimates for the next ones . I recall using this technique (without comment) at https://stats.stackexchange.com/a/63169 .
{ "source": [ "https://stats.stackexchange.com/questions/160552", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/81807/" ] }
161,069
What is the difference between LOESS (locally estimated scatterplot smoothing) and LOWESS (locally weighted scatterplot smoothing)? From Wikipedia I can only see that LOESS is a generalization of LOWESS. Do they have slightly different parameters?
I think it is important to distinguish between methods and their implementations in software. The main difference with respect to the first is that lowess allows only one predictor, whereas loess can be used to smooth multivariate data into a kind of surface. It also gives you confidence intervals. In these senses, loess is a generalization. Both smooth by using tricube weighting around each point, and loess also adds an optional robustification option that re-weights residuals using biweight weighting. Now for the implementation. In some software, lowess uses a linear polynomial, while loess uses a quadratic polynomial (though you can alter that). The defaults and shortcuts that the algorithms use are often quite different, so that it is hard to get the univariate outputs to match exactly. On the other hand, I am not aware of a case where the choice between the two made a substantive difference.
{ "source": [ "https://stats.stackexchange.com/questions/161069", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/29025/" ] }
161,635
I read here that given a sample $ X_1,X_2,...,X_n $ from a continuous distribution with cdf $ F_X $, the sample corresponding to $ U_i = F_X(X_i) $ follows a standard uniform distribution. I have checked this with quick simulations in Python and was easily able to verify the relationship.

    import matplotlib.pyplot as plt
    import scipy.stats

    xs = scipy.stats.norm.rvs(5, 2, 10000)
    fig, axes = plt.subplots(1, 2, figsize=(9, 3))
    axes[0].hist(xs, bins=50)
    axes[0].set_title("Samples")
    axes[1].hist(scipy.stats.norm.cdf(xs, 5, 2), bins=50)
    axes[1].set_title("CDF(samples)")

Resulting in the following plot: I am unable to grasp why this happens. I assume it has to do with the definition of the CDF and its relationship to the PDF, but I am missing something... I would appreciate it if someone could point me to some reading on the subject or help me get some intuition on the subject. EDIT: The CDF looks like this:
Assume $F_X$ is continuous and increasing. Define $Z = F_X(X)$ and note that $Z$ takes values in $[0, 1]$. Then $$F_Z(x) = P(F_X(X) \leq x) = P(X \leq F_X^{-1}(x)) = F_X(F_X^{-1}(x)) = x.$$ On the other hand, if $U$ is a uniform random variable that takes values in $[0, 1]$, $$F_U(x) = \int_{-\infty}^x f_U(u)\,du =\int_0^x \,du =x.$$ Thus $F_Z(x) = F_U(x)$ for every $x\in[0, 1]$. Since $Z$ and $U$ have the same distribution function, $Z$ must also be uniform on $[0, 1]$.
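As a quick numerical check of this argument (a sketch of my own, reusing the question's normal example), the transformed sample is statistically indistinguishable from $U(0,1)$:

    from scipy import stats

    x = stats.norm.rvs(5, 2, size=10_000, random_state=0)
    u = stats.norm.cdf(x, 5, 2)               # Z = F_X(X)
    print(stats.kstest(u, "uniform"))         # large p-value: consistent with Uniform(0, 1)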
{ "source": [ "https://stats.stackexchange.com/questions/161635", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/82403/" ] }
162,143
I'm very new to statistics, and I'm just learning to understand the basics, including $p$-values. But there is a huge question mark in my mind right now, and I kind of hope my understanding is wrong. Here's my thought process: Aren't all researchers around the world somewhat like the monkeys in the "infinite monkey theorem"? Consider that there are 23887 universities in the world. If each university has 1000 students, that's roughly 24 million students each year. Let's say that each year, each student does at least one piece of research, using hypothesis testing with $\alpha=0.05$. Doesn't that mean that even if all the research samples were pulled from a random population, about 5% of them would "reject the null hypothesis as invalid"? Wow. Think about that. That's about a million research papers per year getting published due to "significant" results. If this is how it works, this is scary. It means that a lot of the "scientific truth" we take for granted is based on pure randomness. A simple chunk of R code seems to support my understanding:

    library(data.table)
    dt <- data.table(p=sapply(1:100000, function(x) t.test(rnorm(10,0,1))$p.value))
    dt[p<0.05,]

So does this article on successful $p$-fishing: I Fooled Millions Into Thinking Chocolate Helps Weight Loss. Here's How. Is this really all there is to it? Is this how "science" is supposed to work?
This is certainly a valid concern, but it isn't quite right. If 1,000,000 studies are done and all the null hypotheses are true, then approximately 50,000 will have significant results at p < 0.05. That's what a p value means. However, the null is essentially never strictly true. But even if we loosen it to "almost true" or "about right" or some such, that would mean that the 1,000,000 studies would all have to be about things like "the relationship between social security number and IQ" or "is the length of your toes related to the state of your birth?" and so on. Nonsense. One trouble is, of course, that we don't know which nulls are true. Another problem is the one @Glen_b mentioned in his comment - the file drawer problem. This is why I like Robert Abelson's ideas that he puts forth in Statistics as Principled Argument so much. That is, statistical evidence should be part of a principled argument as to why something is the case, and should be judged on the MAGIC criteria: Magnitude: How big is the effect? Articulation: Is it full of "ifs", "ands" and "buts"? (that's bad) Generality: How widely does it apply? Interestingness. Credibility: Incredible claims require a lot of evidence.
{ "source": [ "https://stats.stackexchange.com/questions/162143", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/82697/" ] }
162,159
Suppose $X\sim N(\mu, \sigma^{2})$ with some small $\sigma^{2}$ and largish $\mu$. Now $X$ will rarely be negative. Suppose I need random variables that are strictly positive but otherwise normal-like. Has this kind of variable been studied before? It is not something that I can find in my statistics books.
{ "source": [ "https://stats.stackexchange.com/questions/162159", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/82705/" ] }
162,174
I have performed a Poisson Regression in R, but I got strange results that I cannot find an answer for. My data is like this: Aspect_16 Nr_Pereti 1 E 49 2 ENE 73 3 ESE 29 4 N 84 5 NE 77 6 NNE 99 7 NNW 77 8 NW 92 9 S 19 10 SE 20 11 SSE 9 12 SSW 17 13 SW 23 14 W 39 15 WNW 56 16 WSW 25 The Nr_Pereti variable are counts for each level in the 'Aspect_16' column. The model formula and results are: summary(model_nr_exp) Call: glm(formula = tab_gen_exp$Nr_Pereti ~ tab_gen_exp$Aspect_16, family = poisson) Deviance Residuals: [1] 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 3.8918 0.1429 27.243 < 2e-16 *** tab_gen_exp$Aspect_16ENE 0.3986 0.1847 2.159 0.030886 * tab_gen_exp$Aspect_16ESE -0.5245 0.2343 -2.239 0.025169 * tab_gen_exp$Aspect_16N 0.5390 0.1798 2.998 0.002714 ** tab_gen_exp$Aspect_16NE 0.4520 0.1827 2.473 0.013386 * tab_gen_exp$Aspect_16NNE 0.7033 0.1747 4.026 5.66e-05 *** tab_gen_exp$Aspect_16NNW 0.4520 0.1827 2.473 0.013386 * tab_gen_exp$Aspect_16NW 0.6300 0.1769 3.562 0.000368 *** tab_gen_exp$Aspect_16S -0.9474 0.2703 -3.505 0.000456 *** tab_gen_exp$Aspect_16SE -0.8961 0.2653 -3.377 0.000733 *** tab_gen_exp$Aspect_16SSE -1.6946 0.3627 -4.673 2.97e-06 *** tab_gen_exp$Aspect_16SSW -1.0586 0.2815 -3.761 0.000169 *** tab_gen_exp$Aspect_16SW -0.7563 0.2528 -2.992 0.002769 ** tab_gen_exp$Aspect_16W -0.2283 0.2146 -1.064 0.287468 tab_gen_exp$Aspect_16WNW 0.1335 0.1956 0.683 0.494845 tab_gen_exp$Aspect_16WSW -0.6729 0.2458 -2.738 0.006182 ** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for poisson family taken to be 1) Null deviance: 2.9506e+02 on 15 degrees of freedom Residual deviance: 2.1316e-14 on 0 degrees of freedom AIC: 120.29 Number of Fisher Scoring iterations: 3 I don't have any Deviance Residuals for the model, and when I try to plot the model, it gives me this error: Error in qqnorm.default(rs, main = main, ylab = ylab23, ylim = ylim, ...) : y is empty or has only NAs In addition: Warning messages: 1: not plotting observations with leverage one: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 2: In min(x) : no non-missing arguments to min; returning Inf 3: In max(x) : no non-missing arguments to max; returning -Inf What did I do wrong here? Thanks.
{ "source": [ "https://stats.stackexchange.com/questions/162174", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/77014/" ] }
162,560
Let a stick of length 1 be broken in $k+1$ fragments uniformly at random. What is the distribution of the length of the longest fragment? More formally, let $(U_1, \ldots U_k)$ be IID $U(0,1)$, and let $(U_{(1)}, \ldots, U_{(k)})$ be the associated order statistics, i.e. we simply order the sample in such a way that $U_{(1)} \leq U_{(2)} \leq, \ldots , \leq U_{(k)}$. Let $Z_k = \max \left(U_{(1)}, U_{(2)}-U_{(1)}, \ldots, U_{(k)} - U_{(k-1)}, 1-U_{(k)}\right)$. I am interested in the distribution of $Z_k$. Moments, asymptotic results, or approximations for $k \uparrow \infty$ are also interesting.
With the information given by @Glen_b I could find the answer. Using the same notations as the question $$ P(Z_k \leq x) = \sum_{j=0}^{k+1} { k+1 \choose j } (-1)^j (1-jx)_+^k, $$ where $a_+ = a$ if $a > 0$ and $0$ otherwise. I also give the expectation and the asymptotic convergence to the Gumbel ( NB : not Beta) distribution $$ E(Z_k)= \frac{1}{k+1}\sum_{i=1}^{k+1}\frac{1}{i} \sim \frac{\log(k+1)}{k+1}, \\ P(Z_k \leq x) \sim \exp\left(- e^{-(k+1)x + \log(k+1)} \right). $$ The material of the proofs is taken from several publications linked in the references. They are somewhat lengthy, but straightforward. 1. Proof of the exact distribution Let $(U_1, \ldots, U_k)$ be IID uniform random variables in the interval $(0,1)$. By ordering them, we obtain the $k$ order statistics denoted $(U_{(1)}, \ldots, U_{(k)})$. The uniform spacings are defined as $\Delta_i = U_{(i)} - U_{(i-1)}$, with $U_{(0)} = 0$ and $U_{(k+1)} = 1$. The ordered spacings are the corresponding ordered statistics $\Delta_{(1)} \leq \ldots \leq \Delta_{(k+1)}$. The variable of interest is $\Delta_{(k+1)}$. For fixed $x \in (0,1)$, we define the indicator variable $\mathbb{1}_i = \mathbb{1}_{\{\Delta_i > x\}}$. By symmetry, the random vector $(\mathbb{1}_1, \ldots, \mathbb{1}_{k+1})$ is exchangeable, so the joint distribution of a subset of size $j$ is the same as the joint distribution of the first $j$. By expanding the product, we thus obtain $$ P(\Delta_{(k+1)} \leq x) = E \left( \prod_{i=1}^{k+1} (1 - \mathbb{1}_i) \right) = 1 + \sum_{j=1}^{k+1} { k+1 \choose j } (-1)^j E \left( \prod_{i=1}^j \mathbb{1}_i \right). $$ We will now prove that $E \left( \prod_{i=1}^j \mathbb{1}_i \right) = (1-jx)_+^k$, which will establish the distribution given above. We prove this for $j=2$, as the general case is proved similarly. $$ E \left( \prod_{i=1}^2 \mathbb{1}_i \right) = P(\Delta_1 > x \cap \Delta_2 > x) = P(\Delta_1 > x) P(\Delta_2 > x | \Delta_1 > x). $$ If $\Delta_1 > x$, the $k$ breakpoints are in the interval $(x,1)$. Conditionally on this event, the breakpoints are still exchangeable, so the probability that the distance between the second and the first breakpoint is greater than $x$ is the same as the probability that the distance between the first breakpoint and the left barrier (at position $x$) is greater than $x$. So $$ P(\Delta_2 > x | \Delta_1 > x) = P\big(\text{all points are in } (2x,1) \big| \text{all points are in } (x,1)\big), \; \text{so} \\ P(\Delta_2 > x \cap \Delta_1 > x) = P\big(\text{all points are in } (2x,1)\big) = (1-2x)_+^k. $$ 2. Expectation For distributions with finite support, we have $$ E(X) = \int P(X > x)dx = 1 - \int P(X \leq x)dx. $$ Integrating the distribution of $\Delta_{(k+1)}$, we obtain $$ E\left(\Delta_{(k+1)}\right) = \frac{1}{k+1}\sum_{j=1}^{k+1}{k+1 \choose j}\frac{(-1)^{j+1}}{j} = \frac{1}{k+1}\sum_{j=1}^{k+1}\frac{1}{j}. $$ The last equality is a classic representation of harmonic numbers $H_i = 1+ \frac{1}{2}+ \ldots + \frac{1}{i}$, which we demonstrate below. $$ H_{k+1} = \int_0^1 1 + x + \ldots + x^k dx = \int_0^1 \frac{1-x^{k+1}}{1-x}dx. $$ With the change of variable $u = 1-x$ and expanding the product, we obtain $$ H_{k+1} = \int_0^1\sum_{j=1}^{k+1}{ k+1 \choose j }(-1)^{j+1}u^{j-1}du = \sum_{j=1}^{k+1}{k+1 \choose j}\frac{(-1)^{j+1}}{j}. $$ 3. Alternative construction of uniform spacings In order to obtain the asymptotic distribution of the largest fragment, we will need to exhibit a classical construction of uniform spacings as exponential variables divided by their sum. 
The probability density of the associated order statistics $(U_{(1)}, \ldots, U_{(k)})$ is $$ f_{U_{(1)}, \ldots U_{(k)}}(u_{(1)}, \ldots, u_{(k)}) = k!, \; 0 \leq u_{(1)} \leq \ldots \leq u_{(k)} \leq 1. $$ If we denote the uniform spacings $\Delta_i = U_{(i)} - U_{(i-1)}$, with $U_{(0)} = 0$, we obtain $$ f_{\Delta_1, \ldots \Delta_k}(\delta_1, \ldots, \delta_k) = k!, \; 0 \leq \delta_1 + \ldots + \delta_k \leq 1. $$ By defining $U_{(k+1)} = 1$, we thus obtain $$ f_{\Delta_1, \ldots \Delta_{k+1}}(\delta_1, \ldots, \delta_{k+1}) = k!, \; \delta_1 + \ldots + \delta_{k+1} = 1. $$ Now, let $(X_1, \ldots, X_{k+1})$ be IID exponential random variables with mean 1, and let $S = X_1 + \ldots + X_{k+1}$. With a simple change of variable, we can see that $$f_{X_1, \ldots X_k, S}(x_1, \ldots, x_k, s) = e^{-s}.$$ Define $Y_i = X_i/S$, such that by a change of variable we obtain $$f_{Y_1, \ldots Y_k, S}(y_1, \ldots, y_k, s) = s^k e^{-s}.$$ Integrating this density with respect to $s$, we thus obtain $$ f_{Y_1, \ldots Y_k}(y_1, \ldots, y_k) = \int_0^{\infty}s^k e^{-s}ds = k!, \; 0 \leq y_1 + \ldots + y_k \leq 1, \; \text{and thus} \\ f_{Y_1, \ldots Y_{k+1}}(y_1, \ldots, y_{k+1}) = k!, \; y_1 + \ldots + y_{k+1} = 1. $$ So the joint distribution of the $k+1$ uniform spacings on the interval $(0,1)$ is the same as the joint distribution of $k+1$ exponential random variables divided by their sum. We come to the following equivalence of distribution $$ \Delta_{(k+1)} \equiv \frac{X_{(k+1)}}{X_1 + \ldots + X_{k+1}}. $$ 4. Asymptotic distribution Using the equivalence above, we obtain $$ \begin{align} P\big((k+1)\Delta_{(k+1)} - \log(k+1) \leq x\big) &= P\left(X_{(k+1)} \leq (x + \log(k+1))\frac{X_1 + \ldots + X_{k+1}}{k+1}\right) \\ &= P\left(X_{(k+1)} - \log(k+1) \leq x + (x + \log(k+1))T_{k+1}\right), \end{align} $$ where $T_{k+1} = \frac{X_1+\ldots+X_{k+1}}{k+1} -1$. This variable vanishes in probability because $E\left(T_{k+1}\right) = 0$ and $Var\big(\log(k+1)T_{k+1}\big) = \frac{(\log(k+1))^2}{k+1} \downarrow 0$. Asymptotically, the distribution is the same as that of $X_{(k+1)} - \log(k+1)$. Because the $X_i$ are IID, we have $$ \begin{align} P\left(X_{(k+1)} - \log(k+1) \leq x \right) &= P\left(X_1 \leq x + \log(k+1)\right)^{k+1} \\ &= \left(1-e^{-x - \log(k+1)}\right)^{k+1} = \left(1-\frac{e^{-x}}{k+1}\right)^{k+1} \sim \exp\left\{-e^{-x}\right\}. \end{align} $$ 5. Graphical overview The plot below shows the distribution of the largest fragment for different values of $k$. For $k=10, 20, 50$, I have also overlaid the asymptotic Gumbel distribution (thin line). The Gumbel is a very bad approximation for small values of $k$ so I omit them to not overload the picture. The Gumbel approximation is good from $k \approx 50$. 6. References The proofs above are taken from references 2 and 3. The cited literature contains many more results, such as the distribution of the ordered spacings of any rank, their limit distribution and some alternative constructions of the ordered uniform spacings. The key references are not easily accessible, so I also provide links to the full text. Bairamov et al. (2010) Limit results for ordered uniform spacings, Stat Papers, 51:1, pp 227-240. Holst (1980) On the lengths of the pieces of a stick broken at random, J. Appl. Prob., 17, pp 623-634. Pyke (1965) Spacings, JRSS(B) 27:3, pp 395-449. Renyi (1953) On the theory of order statistics, Acta Math Hung, 4, pp 191-231.
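A quick simulation (a sketch of my own, with an arbitrary $k$) agrees with both the exact CDF and the expectation formula derived above:

    import numpy as np
    from math import comb

    rng = np.random.default_rng(1)
    k, n_sim = 10, 200_000
    cuts = np.sort(rng.random((n_sim, k)), axis=1)
    pieces = np.diff(cuts, axis=1, prepend=0.0, append=1.0)   # the k+1 spacings of each stick
    z = pieces.max(axis=1)                                    # longest fragment per simulation

    def exact_cdf(x, k):
        return sum(comb(k + 1, j) * (-1) ** j * max(1 - j * x, 0.0) ** k for j in range(k + 2))

    x0 = 0.3
    print((z <= x0).mean(), exact_cdf(x0, k))                        # simulated vs exact CDF
    print(z.mean(), sum(1 / i for i in range(1, k + 2)) / (k + 1))   # simulated vs exact mean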
{ "source": [ "https://stats.stackexchange.com/questions/162560", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/10849/" ] }
162,988
Why is the de-facto standard sigmoid function, $\frac{1}{1+e^{-x}}$, so popular in (non-deep) neural networks and logistic regression? Why don't we use many of the other differentiable functions, with faster computation time or slower decay (so that the vanishing gradient occurs less)? A few examples are on the Wikipedia page about sigmoid functions. One of my favorites with slow decay and fast calculation is $\frac{x}{1+|x|}$. EDIT The question is different from Comprehensive list of activation functions in neural networks with pros/cons as I'm only interested in the 'why', and only for the sigmoid.
Quoting myself from this answer to a different question: In section 4.2 of Pattern Recognition and Machine Learning (Springer 2006), Bishop shows that the logit arises naturally as the form of the posterior probability distribution in a Bayesian treatment of two-class classification. He then goes on to show that the same holds for discretely distributed features, as well as a subset of the family of exponential distributions. For multi-class classification the logit generalizes to the normalized exponential or softmax function. This explains why this sigmoid is used in logistic regression. Regarding neural networks, this blog post explains how different nonlinearities including the logit / softmax and the probit used in neural networks can be given a statistical interpretation and thereby a motivation. The underlying idea is that a multi-layered neural network can be regarded as a hierarchy of generalized linear models; according to this, activation functions are link functions, which in turn correspond to different distributional assumptions.
{ "source": [ "https://stats.stackexchange.com/questions/162988", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/82695/" ] }
163,008
On my website (interactivia.ro) I've added a gamification module from CaptainUp. I'm interested in finding out how this gamification module has influenced my website traffic. The data extracted from Google Analytics covers a year, and for this period I also have (almost) daily interactions from the gamification module, something like this: https://goo.gl/fFZCJ1 My questions: - how can I find out the influence (if any) of the gamification module on my website traffic? - is there a step-by-step example (in SPSS)? Thank you, Catalin
{ "source": [ "https://stats.stackexchange.com/questions/163008", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/56821/" ] }
163,034
I must confess that I hadn't previously heard of that term in any of my classes, undergrad or grad. What does it mean for a logistic regression to be Bayesian? I'm looking for an explanation with a transition from regular logistic to Bayesian logistic similar to the following: This is the equation in the linear regression model: $E(y) = \beta_0 + \beta_1x_1 + ... + \beta_nx_n$. This is the equation in the logistic regression model: $\ln(\frac{E(y)}{1-E(y)}) = \beta_0 + \beta_1x_1 + ... + \beta_nx_n$. This is done when y is categorical. What we have done is change $E(y)$ to $\ln(\frac{E(y)}{1-E(y)})$. So what's done to the logistic regression model in Bayesian logistic regression? I'm guessing it's not something to do with the equation. This book preview seems to define it, but I don't really understand. What is all this prior and likelihood stuff? What is $\alpha$? Can someone please explain that part of the book or the Bayesian logit model in another way? Note: This has been asked before, but not answered very well, I think.
Logistic regression can be described as a linear combination $$ \eta = \beta_0 + \beta_1 X_1 + ... + \beta_k X_k $$ that is passed through the link function $g$: $$ g(E(Y)) = \eta $$ where the link function is a logit function $$ E(Y|X,\beta) = p = \text{logit}^{-1}( \eta ) $$ where $Y$ takes only values in $\{0,1\}$ and the inverse logit function transforms the linear combination $\eta$ to this range. This is where classical logistic regression ends. However, if you recall that $E(Y) = P(Y = 1)$ for variables that take only values in $\{0,1\}$, then $E(Y | X,\beta)$ can be considered as $P(Y = 1 | X,\beta)$. In this case, the logit function output could be thought of as the conditional probability of "success", i.e. $P(Y=1|X,\beta)$. The Bernoulli distribution is a distribution that describes the probability of observing a binary outcome with some parameter $p$, so we can describe $Y$ as $$ y_i \sim \text{Bernoulli}(p) $$ So with logistic regression we look for some parameters $\beta$ that, together with the independent variables $X$, form a linear combination $\eta$. In classical regression $E(Y|X,\beta) = \eta$ (we assume the link function to be the identity function), however to model $Y$ that takes values in $\{0,1\}$ we need to transform $\eta$ so that it fits in the $[0,1]$ range. Now, to estimate logistic regression in a Bayesian way you pick some priors for the $\beta_i$ parameters as with linear regression (see Kruschke et al, 2012), then use the logit function to transform the linear combination $\eta$, so as to use its output as the $p$ parameter of the Bernoulli distribution that describes your $Y$ variable. So, yes, you actually use the equation and the logit link function the same way as in the frequentist case, and the rest works (e.g. choosing priors) like estimating linear regression the Bayesian way. The simple approach for choosing priors is to choose Normal distributions (but you can also use other distributions, e.g. a $t$- or Laplace distribution for a more robust model) for the $\beta_i$'s, with parameters $\mu_i$ and $\sigma_i^2$ that are preset or taken from hierarchical priors. Now, having the model definition, you can use software such as JAGS to perform Markov Chain Monte Carlo simulation for you to estimate the model. Below I post JAGS code for a simple logistic model (check here for more examples).

    model {
      # setting up priors
      a ~ dnorm(0, .0001)
      b ~ dnorm(0, .0001)
      for (i in 1:N) {
        # passing the linear combination through the logit function
        logit(p[i]) <- a + b * x[i]
        # likelihood function
        y[i] ~ dbern(p[i])
      }
    }

As you can see, the code directly translates to the model definition. What the software does is draw some values from the Normal priors for a and b, then use those values to estimate p, and finally use the likelihood function to assess how likely your data is given those parameters (this is when you use Bayes theorem, see here for a more detailed description). The basic logistic regression model can be extended to model the dependency between the predictors using a hierarchical model (including hyperpriors).
In this case you can draw $\beta_i$'s from Multivariate Normal distribution that enables us to include information about covariance $\boldsymbol{\Sigma}$ between independent variables $$ \begin{pmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_k \end{pmatrix} \sim \mathrm{MVN} \left( \begin{bmatrix} \mu_0 \\ \mu_1 \\ \vdots \\ \mu_k \end{bmatrix}, \begin{bmatrix} \sigma^2_0 & \sigma_{0,1} & \ldots & \sigma_{0,k} \\ \sigma_{1,0} & \sigma^2_1 & \ldots &\sigma_{1,k} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{k,0} & \sigma_{k,1} & \ldots & \sigma^2_k \end{bmatrix} \right)$$ ...but this is going into details, so let's stop right here. The "Bayesian" part in here is choosing priors, using Bayes theorem and defining model in probabilistic terms. See here for definition of "Bayesian model" and here for some general intuition on Bayesian approach . What you can also notice is that defining models is pretty straightforward and flexible with this approach. Kruschke, J. K., Aguinis, H., & Joo, H. (2012). The time has come: Bayesian methods for data analysis in the organizational sciences. Organizational Research Methods, 15 (4), 722-752. Gelman, A., Jakulin, A., Pittau, G.M., and Su, Y.-S. (2008). A weakly informative default prior distribution for logistic and other regression models. The Annals of Applied Statistics, 2 (4), 1360–1383.
{ "source": [ "https://stats.stackexchange.com/questions/163034", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/44339/" ] }
163,071
A pet saying of many statisticians is "Correlation doesn't imply causation." This is certainly true, but one thing that DOES seem implied here is that correlation has little or no value. Is this true? Is it useless to have knowledge that two variables are correlated? I can't imagine that is the case. I'm not horribly familiar with predictive analysis, but it seems that if X is a predictor of Y , it would be useful in predicting future values of Y based on X , regardless of causality. Am I incorrect in seeing value in correlation? And if not, in what situations might a statistician or data scientist use correlation without causation?
Correlation (or any other measure of association) is useful for prediction regardless of causation. Suppose that you measure a clear, stable association between two variables. What this means is that knowing the level of one variable also provides you with some information about another variable of interest, which you can use to help predict one variable as a function of the other and, most importantly, take some action based on that prediction. Taking action involves changing one or more variables, such as when making an automated recommendation or employing some medical intervention. Of course, you could make better predictions and act more effectively if you had more insight into the direct or indirect relationships between two variables. This insight may involve other variables, including spatial and temporal ones.
{ "source": [ "https://stats.stackexchange.com/questions/163071", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/80124/" ] }
163,304
What are the advantages of using multiple LSTMs, stacked side-by-side, in a deep network? I am using an LSTM to represent a sequence of inputs as a single input. So once I have that single representation, why would I pass it through again? I am asking because I saw this in a natural-language generation program.
I think that you are referring to vertically stacked LSTM layers (assuming the horizontal axis is the time axis). In that case the main reason for stacking LSTMs is to allow for greater model complexity. In the case of a simple feedforward net we stack layers to create a hierarchical feature representation of the input data, which we then use for some machine learning task. The same applies to stacked LSTMs. At every time step an LSTM receives, besides its recurrent input, a fresh input from the layer below. If that input is already the result of an LSTM layer (or a feedforward layer), then the current LSTM can create a more complex feature representation of the current input. Now, the difference between having a feedforward layer between the feature input and the LSTM layer and having another LSTM layer is that a feedforward layer (say a fully connected layer) does not receive feedback from its previous time step and thus cannot account for certain patterns. Having an LSTM instead (i.e. using a stacked LSTM representation), more complex input patterns can be described at every layer.
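For concreteness, here is a minimal PyTorch sketch (toy shapes of my own, unrelated to the question's program) of a two-layer stacked LSTM, where the second layer consumes the first layer's hidden-state sequence at every time step:

    import torch
    import torch.nn as nn

    batch, seq_len, n_features, hidden = 8, 20, 16, 32
    x = torch.randn(batch, seq_len, n_features)

    stacked = nn.LSTM(input_size=n_features, hidden_size=hidden,
                      num_layers=2, batch_first=True)   # layer 2 reads layer 1's output at every step
    out, (h_n, c_n) = stacked(x)

    print(out.shape)    # (batch, seq_len, hidden): per-time-step features from the top layer
    print(h_n.shape)    # (num_layers, batch, hidden): final hidden state of each stacked layer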
{ "source": [ "https://stats.stackexchange.com/questions/163304", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/81435/" ] }
163,388
I keep reading this and intuitively I can see this but how does one go from L2 regularization to saying that this is a Gaussian Prior analytically? Same goes for saying L1 is equivalent to a Laplacean prior. Any further references would be great.
Let us imagine that you want to infer some parameter $\beta$ from some observed input-output pairs $(x_1,y_1),\dots,(x_N,y_N)$. Let us assume that the outputs are linearly related to the inputs via $\beta$ and that the data are corrupted by some noise $\epsilon$: $$y_n = \beta x_n + \epsilon,$$ where $\epsilon$ is Gaussian noise with mean $0$ and variance $\sigma^2$. This gives rise to a Gaussian likelihood: $$\prod_{n=1}^N \mathcal{N}(y_n|\beta x_n,\sigma^2).$$ Let us regularise parameter $\beta$ by imposing the Gaussian prior $\mathcal{N}(\beta|0,\lambda^{-1}),$ where $\lambda$ is a strictly positive scalar ($\lambda$ quantifies by how much we believe that $\beta$ should be close to zero, i.e. it controls the strength of the regularisation). Hence, combining the likelihood and the prior we simply have: $$\prod_{n=1}^N \mathcal{N}(y_n|\beta x_n,\sigma^2) \mathcal{N}(\beta|0,\lambda^{-1}).$$ Let us take the logarithm of the above expression. Dropping some constants we get: $$\sum_{n=1}^N -\frac{1}{\sigma^2}(y_n-\beta x_n)^2 - \lambda \beta^2 + \mbox{const}.$$ If we maximise the above expression with respect to $\beta$, we get the so-called maximum a-posteriori estimate for $\beta$, or MAP estimate for short. In this expression it becomes apparent why the Gaussian prior can be interpreted as an L2 regularisation term. The relationship between the L1 norm and the Laplace prior can be understood in the same fashion. Instead of a Gaussian prior, multiply your likelihood with a Laplace prior and then take the logarithm. A good reference (perhaps slightly advanced) detailing both issues is the paper "Adaptive Sparseness for Supervised Learning", which currently does not seem easy to find online. Alternatively look at "Adaptive Sparseness using Jeffreys Prior". Another good reference is "On Bayesian classification with Laplace priors".
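As a small numerical check of the derivation above (a sketch with simulated data): maximising the log-posterior over $\beta$ gives the same answer as the closed-form penalised estimate $\sum_n x_n y_n / (\sum_n x_n^2 + \lambda\sigma^2)$, which is exactly a ridge-style (L2-penalised) solution.
set.seed(2)
n <- 100; sigma2 <- 1; lambda <- 5
x <- rnorm(n)
y <- 0.7 * x + rnorm(n, sd = sqrt(sigma2))
log_post <- function(b) -sum((y - b * x)^2) / sigma2 - lambda * b^2   # log-likelihood + log-prior (up to constants)
b_map   <- optimize(log_post, interval = c(-10, 10), maximum = TRUE)$maximum
b_ridge <- sum(x * y) / (sum(x^2) + lambda * sigma2)                  # closed-form L2-penalised estimate
c(b_map, b_ridge)                                                     # the two agree up to optimiser tolerance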
{ "source": [ "https://stats.stackexchange.com/questions/163388", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/26069/" ] }
163,915
Can someone explain to me why would anyone choose a parametric over a nonparametric statistical method for hypothesis testing or regression analysis? In my mind, it's like going for rafting and choosing a non-water resistant watch, because you may not get it wet. Why not use the tool that works on every occasion?
Rarely if ever do a parametric test and a non-parametric test actually have the same null. The parametric $t$-test is testing the mean of the distribution, assuming the first two moments exist. The Wilcoxon rank sum test does not assume any moments, and tests equality of distributions instead. Its implied parameter is a weird functional of distributions, the probability that the observation from one sample is lower than the observation from the other. You can sort of talk about comparisons between the two tests under the completely specified null of identical distributions... but you have to recognize that the two tests are testing different hypotheses. The information that parametric tests bring in along with their assumption helps improve the power of the tests. Of course that information had better be right, but there are few if any domains of human knowledge these days where such preliminary information does not exist. An interesting exception that explicitly says "I don't want to assume anything" is the courtroom, where non-parametric methods continue to be widely popular -- and it makes perfect sense for the application. There's probably a good reason, pun intended, that Phillip Good authored good books on both non-parametric statistics and courtroom statistics. There are also testing situations where you don't have access to the microdata necessary for the nonparametric test. Suppose you were asked to compare two groups of people to gauge whether one is more obese than the other. In an ideal world, you will have height and weight measurements for everybody, and you could form a permutation test stratifying by height. In a less than ideal (i.e., real) world, you may only have the mean height and mean weight in each group (or maybe some ranges or variances of these characteristics on top of the sample means). Your best bet is then to compute the mean BMI for each group and compare them if you only have the means; or assume a bivariate normal for height and weight if you have means and variances (you'd probably have to take a correlation from some external data if it did not come with your samples), form some sort of regression lines of weight on height within each group, and check whether one line is above the other.
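To illustrate the power point with a quick simulation (a rough sketch with arbitrary sample sizes and effect size): when the normality assumption actually holds, the t-test rejects a false null a little more often than the Wilcoxon rank sum test.
set.seed(3)
reject <- replicate(5000, {
  x <- rnorm(25, mean = 0.5)                 # normal data with a true shift
  y <- rnorm(25, mean = 0)
  c(t = t.test(x, y)$p.value < 0.05,
    w = wilcox.test(x, y)$p.value < 0.05)
})
rowMeans(reject)   # empirical power; the t-test is typically slightly higher under normality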
{ "source": [ "https://stats.stackexchange.com/questions/163915", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/83064/" ] }
164,048
Since RF can handle non-linearity but can't provide coefficients, would it be wise to use random forest to gather the most important features and then plug those features into a multiple linear regression model in order to obtain their coefficients?
Since RF can handle non-linearity but can't provide coefficients, would it be wise to use Random Forest to gather the most important Features and then plug those features into a Multiple Linear Regression model in order to explain their signs? I interpret OP's one-sentence question to mean that OP wishes to understand the desirability of the following analysis pipeline: Fit a random forest to some data By some metric of variable importance from (1), select a subset of high-quality features. Using the variables from (2), estimate a linear regression model. This will give OP access to the coefficients that OP notes RF cannot provide. From the linear model in (3), qualitatively interpret the signs of the coefficient estimates. I don't think this pipeline will accomplish what you'd like. Variables that are important in random forest don't necessarily have any sort of linearly additive relationship with the outcome. This remark shouldn't be surprising: it's what makes random forest so effective at discovering nonlinear relationships. Here's an example. I created a classification problem with 10 noise features, two "signal" features, and a circular decision boundary. set.seed(1) N <- 500 x1 <- rnorm(N, sd=1.5) x2 <- rnorm(N, sd=1.5) y <- apply(cbind(x1, x2), 1, function(x) (x%*%x)<1) plot(x1, x2, col=ifelse(y, "red", "blue")) lines(cos(seq(0, 2*pi, len=1000)), sin(seq(0, 2*pi, len=1000))) And when we apply the RF model, we are not surprised to find that these features are easily picked out as important by the model. (NB: this model isn't tuned at all .) x_junk <- matrix(rnorm(N*10, sd=1.5), ncol=10) x <- cbind(x1, x2, x_junk) names(x) <- paste("V", 1:ncol(x), sep="") rf <- randomForest(as.factor(y)~., data=x, mtry=4) importance(rf) MeanDecreaseGini x1 49.762104 x2 54.980725 V3 5.715863 V4 5.010281 V5 4.193836 V6 7.147988 V7 5.897283 V8 5.338241 V9 5.338689 V10 5.198862 V11 4.731412 V12 5.221611 But when we down-select to just these two, useful features, the resulting linear model is awful. summary(badmodel <- glm(y~., data=data.frame(x1,x2), family="binomial")) The important part of the summary is the comparison of the residual deviance and the null deviance. We can see that the model does basically nothing to "move" the deviance. Moreover, the coefficient estimates are essentially zero. Call: glm(formula = as.factor(y) ~ ., family = "binomial", data = data.frame(x1, x2)) Deviance Residuals: Min 1Q Median 3Q Max -0.6914 -0.6710 -0.6600 -0.6481 1.8079 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) -1.398378 0.112271 -12.455 <2e-16 *** x1 -0.020090 0.076518 -0.263 0.793 x2 -0.004902 0.071711 -0.068 0.946 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 497.62 on 499 degrees of freedom Residual deviance: 497.54 on 497 degrees of freedom AIC: 503.54 Number of Fisher Scoring iterations: 4 What accounts for the wild difference between the two models? Well, clearly the decision boundary we're trying to learn is not a linear function of the two "signal" features. Obviously if you knew the functional form of the decision boundary prior to estimating the regression, you could apply some transformation to encode the data in a way that regression could then discover... (But I've never known the form of the boundary ahead of time in any real-world problem.) 
Since we're only working with two signal features in this case, a synthetic data set without noise in the class labels, that boundary between classes is very obvious in our plot. But it's less obvious when working with real data in a realistic number of dimensions. Moreover, in general, random forest can fit different models to different subsets of the data. In a more complicated example, it won't be obvious what's going on from a single plot at all, and building a linear model of similar predictive power will be even harder. Because we're only concerned with two dimensions, we can make a prediction surface. As expected, the random forest model learns that the neighborhood around the origin is important. M <- 100 x_new <- seq(-4,4, len=M) x_new_grid <- expand.grid(x_new, x_new) names(x_new_grid) <- c("x1", "x2") x_pred <- data.frame(x_new_grid, matrix(nrow(x_new_grid)*10, ncol=10)) names(x_pred) <- names(x) y_hat <- predict(object=rf, newdata=x_pred, "vote")[,2] library(fields) y_hat_mat <- as.matrix(unstack(data.frame(y_hat, x_new_grid), y_hat~x1)) image.plot(z=y_hat_mat, x=x_new, y=x_new, zlim=c(0,1), col=tim.colors(255), main="RF Prediction surface", xlab="x1", ylab="x2") As implied by our abysmal model output, the prediction surface for the reduced-variable logistic regression model is basically flat. bad_y_hat <- predict(object=badmodel, newdata=x_new_grid, type="response") bad_y_hat_mat <- as.matrix(unstack(data.frame(bad_y_hat, x_new_grid), bad_y_hat~x1)) image.plot(z=bad_y_hat_mat, x=x_new, y=x_new, zlim=c(0,1), col=tim.colors(255), main="Logistic regression prediction surface", xlab="x1", ylab="x2") HongOoi notes that the class membership isn't a linear function of the features, but that it is a linear function under a transformation. Because the decision boundary is $1=x_1^2+x_2^2,$ if we square these features, we will be able to build a more useful linear model. This is deliberate. While the RF model can find signal in those two features without transformation, the analyst has to be more specific to get similarly helpful results in the GLM. Perhaps that's sufficient for OP: finding a useful set of transformations for 2 features is easier than 12. But my point is that even if a transformation will yield a useful linear model, RF feature importance won't suggest the transformation on its own.
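Continuing with the same simulated data, the transformation hinted at above is easy to check (a follow-up sketch, not part of the original output; note the classes are perfectly separable in the squared coordinates, so glm will warn about fitted probabilities of 0 or 1):
goodmodel <- glm(y ~ I(x1^2) + I(x2^2), family = "binomial")       # supply the squared features by hand
c(null = goodmodel$null.deviance, residual = goodmodel$deviance)   # residual deviance collapses towards zero,
                                                                   # unlike the untransformed fit above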
{ "source": [ "https://stats.stackexchange.com/questions/164048", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/83400/" ] }
164,233
I'm trying to understand the differences between GBM & Adaboost. This is what I've understood so far: They are both boosting algorithms, which learn from the previous model's errors and finally make a weighted sum of the models. GBM and Adaboost are pretty similar except for their loss functions. But it is still difficult for me to grasp the differences between them. Can someone give me intuitive explanations?
I found this introduction which provides some intuitive explanations: In Gradient Boosting, ‘shortcomings’ (of existing weak learners) are identified by gradients . In AdaBoost, ‘shortcomings’ are identified by high-weight data points . By means of an exponential loss function, AdaBoost gives more weights to those samples fitted worse in previous steps. Today, AdaBoost is regarded as a special case of Gradient Boosting in terms of loss function. Historically it preceded Gradient Boosting to which it was later generalized, as shown in the history provided in the introduction: Invent AdaBoost, the first successful boosting algorithm [Freund et al., 1996, Freund and Schapire, 1997] Formulate AdaBoost as gradient descent with a special loss function [Breiman et al., 1998, Breiman, 1999] Generalize AdaBoost to Gradient Boosting in order to handle a variety of loss functions [Friedman et al., 2000, Friedman, 2001]
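A tiny numerical illustration of that connection (a sketch, not taken from the cited introduction): under the exponential loss $L(y, f) = \exp(-yf)$, the magnitude of the negative gradient for each sample equals the weight AdaBoost would assign it, so poorly fitted points are emphasised either way.
lab   <- c(1, 1, -1, -1, 1)              # labels in {-1, +1}
f_hat <- c(2.0, 0.1, -1.5, 0.8, -0.7)    # current model scores; the last two samples are misclassified
adaboost_w <- exp(-lab * f_hat)          # AdaBoost-style sample weights (up to normalisation)
neg_grad   <- lab * exp(-lab * f_hat)    # negative gradient of exp(-y * f) with respect to f
cbind(adaboost_w, abs(neg_grad))         # identical columns: the worst-fitted samples dominate both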
{ "source": [ "https://stats.stackexchange.com/questions/164233", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/81518/" ] }
164,381
I'm following a tutorial here: http://www.r-bloggers.com/computing-and-visualizing-pca-in-r/ to gain a better understanding of PCA. The tutorial uses the Iris dataset and applies a log transform prior to PCA: Notice that in the following code we apply a log transformation to the continuous variables as suggested by [1] and set center and scale equal to TRUE in the call to prcomp to standardize the variables prior to the application of PCA. Could somebody explain to me in plain English why you first use the log function on the first four columns of the Iris dataset? I understand it has something to do with making data relative, but am confused about what exactly the function of log, center and scale is. The reference [1] above is to Venables and Ripley, Modern Applied Statistics with S-PLUS, Section 11.1, which briefly says: The data are physical measurements, so a sound initial strategy is to work on log scale. This has been done throughout.
The iris data set is a fine example to learn PCA. That said, the first four columns describing length and width of sepals and petals are not an example of strongly skewed data. Therefore log-transforming the data does not change the results much, since the resulting rotation of the principal components is quite unchanged by log-transformation. In other situations log-transformation is a good choice. We perform PCA to get insight of the general structure of a data set. We center, scale and sometimes log-transform to filter off some trivial effects, which could dominate our PCA. The algorithm of a PCA will in turn find the rotation of each PC to minimize the squared residuals, namely the sum of squared perpendicular distances from any sample to the PCs. Large values tend to have high leverage. Imagine injecting two new samples into the iris data. A flower with 430 cm petal length and one with petal length of 0.0043 cm. Both flowers are very abnormal being 100 times larger and 1000 times smaller respectively than average examples. The leverage of the first flower is huge, such that the first PCs mostly will describe the differences between the large flower and any other flower. Clustering of species is not possible due to that one outlier. If the data are log-transformed, the absolute value now describes the relative variation. Now the small flower is the most abnormal one. Nonetheless it is possible to both contain all samples in one image and provide a fair clustering of the species. Check out this example: data(iris) #get data #add two new observations from two new species to iris data levels(iris[,5]) = c(levels(iris[,5]),"setosa_gigantica","virginica_brevis") iris[151,] = list(6,3, 430 ,1.5,"setosa_gigantica") # a big flower iris[152,] = list(6,3,.0043,1.5 ,"virginica_brevis") # a small flower #Plotting scores of PC1 and PC" without log transformation plot(prcomp(iris[,-5],cen=T,sca=T)$x[,1:2],col=iris$Spec) #Plotting scores of PC1 and PC2 with log transformation plot(prcomp(log(iris[,-5]),cen=T,sca=T)$x[,1:2],col=iris$Spec)
{ "source": [ "https://stats.stackexchange.com/questions/164381", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/82050/" ] }
164,876
When training a neural network, what difference does it make to set: batch size to $a$ and number of iterations to $b$ vs. batch size to $c$ and number of iterations to $d$ where $ ab = cd $? To put it otherwise, assuming that we train the neural network with the same amount of training examples, how to set the optimal batch size and number of iterations? (where batch size * number of iterations = number of training examples shown to the neural network, with the same training example being potentially shown several times) I am aware that the higher the batch size, the more memory space one needs, and it often makes computations faster. But in terms of performance of the trained network, what difference does it make?
From Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. https://arxiv.org/abs/1609.04836 : The stochastic gradient descent method and its variants are algorithms of choice for many Deep Learning tasks. These methods operate in a small-batch regime wherein a fraction of the training data, usually 32--512 data points, is sampled to compute an approximation to the gradient. It has been observed in practice that when using a larger batch there is a significant degradation in the quality of the model, as measured by its ability to generalize. There have been some attempts to investigate the cause for this generalization drop in the large-batch regime, however the precise answer for this phenomenon is, hitherto unknown. In this paper, we present ample numerical evidence that supports the view that large-batch methods tend to converge to sharp minimizers of the training and testing functions -- and that sharp minima lead to poorer generalization. In contrast, small-batch methods consistently converge to flat minimizers, and our experiments support a commonly held view that this is due to the inherent noise in the gradient estimation. We also discuss several empirical strategies that help large-batch methods eliminate the generalization gap and conclude with a set of future research ideas and open questions. […] The lack of generalization ability is due to the fact that large-batch methods tend to converge to sharp minimizers of the training function . These minimizers are characterized by large positive eigenvalues in $\nabla^2 f(x)$ and tend to generalize less well. In contrast, small-batch methods converge to flat minimizers characterized by small positive eigenvalues of $\nabla^2 f(x)$. We have observed that the loss function landscape of deep neural networks is such that large-batch methods are almost invariably attracted to regions with sharp minima and that, unlike small batch methods, are unable to escape basins of these minimizers. […] Also, some good insights from Ian Goodfellow answering to why do not use the whole training set to compute the gradient? on Quora: The size of the learning rate is limited mostly by factors like how curved the cost function is. You can think of gradient descent as making a linear approximation to the cost function, then moving downhill along that approximate cost. If the cost function is highly non-linear (highly curved) then the approximation will not be very good for very far, so only small step sizes are safe. You can read more about this in Chapter 4 of the deep learning textbook, on numerical computation: http://www.deeplearningbook.org/contents/numerical.html When you put m examples in a minibatch, you need to do O(m) computation and use O(m) memory, but you reduce the amount of uncertainty in the gradient by a factor of only O(sqrt(m)). In other words, there are diminishing marginal returns to putting more examples in the minibatch. You can read more about this in Chapter 8 of the deep learning textbook, on optimization algorithms for deep learning: http://www.deeplearningbook.org/contents/optimization.html Also, if you think about it, even using the entire training set doesn’t really give you the true gradient. The true gradient would be the expected gradient with the expectation taken over all possible examples, weighted by the data generating distribution. 
Using the entire training set is just using a very large minibatch size, where the size of your minibatch is limited by the amount you spend on data collection, rather than the amount you spend on computation. Related: Batch gradient descent versus stochastic gradient descent
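The sqrt(m) remark in the second quote is easy to verify empirically (a self-contained sketch with a toy linear model, unrelated to any particular network): the spread of the minibatch gradient estimate shrinks only with the square root of the batch size, which is exactly the diminishing-returns argument above.
set.seed(4)
N <- 100000
x <- rnorm(N); y <- 3 * x + rnorm(N)
w <- 1                                                  # fixed parameter value, away from the optimum
sizes <- c(8, 32, 128, 512)
grad_sd <- sapply(sizes, function(m) {
  g <- replicate(2000, {
    i <- sample(N, m)                                   # draw one minibatch
    mean(-2 * x[i] * (y[i] - w * x[i]))                 # minibatch gradient of mean squared error w.r.t. w
  })
  sd(g)
})
grad_sd                    # noise in the gradient estimate shrinks with batch size ...
grad_sd * sqrt(sizes)      # ... roughly like 1/sqrt(m): this product stays approximately constant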
{ "source": [ "https://stats.stackexchange.com/questions/164876", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12359/" ] }
165,194
I would like to hierarchically cluster my data, but rather than using Euclidean distance, I'd like to use correlation. Also, since the correlation coefficient ranges from -1 to 1, with both -1 and 1 denoting "co-regulation" in my study, I am treating both -1 and 1 as d = 0. So my calculation is $\ d = 1-|r|$ I read in a separate question (regarding k-means clustering), that you should convert r into true euclidean d using the cosine theorem: $d = \sqrt{2(1-r)}$ What is the most accurate way to convert correlation to distance for hierarchical clustering?
Requirements for hierarchical clustering Hierarchical clustering can be used with arbitrary similarity and dissimilarity measures. (Most tools expect a dissimilarity, but will allow negative values - it's up to you to ensure whether small or large valued will be preferred.). Only methods based on centroids or variance (such as Ward's method) are special, and should be used with squared Euclidean. (To understand why, please study these linkages carefully.) Single-linkage, average-linkage, complete-linkage are not much affected, it will still be the minimum / average / maximum of the pairwise dissimilarities. Correlation as distance measure If you preprocess your data ( $n$ observations, $p$ features) such that each feature has $\mu=0$ and $\sigma=1$ (which disallows constant features!), then correlation reduces to cosine: $$ \text{Corr} (X,Y) = \frac{\text{Cov}(X, Y)} {\sigma_X \sigma_Y} = \frac{\mathbb{E} \left[ (X - \mu_X) (Y - \mu_Y) \right]} {\sigma_X \sigma_Y} = \mathbb{E} [XY] = \frac1n \left<X, Y\right> $$ Under the same conditions, squared Euclidean distance also reduces to cosine: $$ d_\text{Euclid}^2(X,Y) = \sum (X_i - Y_i)^2 = \sum X_i^2 + \sum Y_i^2 - 2 \sum X_i Y_i \\ = 2n - 2\left<X, Y\right> = 2n \left[1 - \text{Corr}(X, Y)\right] $$ Therefore, unless your data is degenerate, using correlation for hierarchical clustering should be okay. Just preprocess it as explained above, then use squared Euclidean distance.
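A quick check of that algebra in R (a sketch on random data; note that scale() standardises with the n - 1 denominator, so the constant becomes 2(n - 1) rather than 2n):
set.seed(5)
n <- 50; p <- 8
X  <- matrix(rnorm(n * p), n, p)                   # n observations, p features
Xs <- scale(X)                                     # centre and standardise each feature
D2 <- as.matrix(dist(t(Xs)))^2                     # squared Euclidean distance between features
max(abs(D2 - 2 * (n - 1) * (1 - cor(X))))          # ~ 0: squared Euclidean distance is a rescaled 1 - correlation
hc <- hclust(as.dist(1 - abs(cor(X))), method = "average")   # the 1 - |r| dissimilarity from the question
plot(hc)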
{ "source": [ "https://stats.stackexchange.com/questions/165194", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/84115/" ] }
166,323
So I've been reading a lot about how to correctly interpret a P-value, and from what I've read, the p-value says NOTHING about the probability that the null hypothesis is true or false. However, when reading the following statement: The p – value represents the probability of making a type I error, or rejecting the null hypothesis when it is true. The smaller the p value, the smaller is the probability that you would be wrongly rejecting the null hypothesis. EDIT: And then 5 minutes later I read: Incorrect interpretations of P values are very common. The most common mistake is to interpret a P value as the probability of making a mistake by rejecting a true null hypothesis (a Type I error). This confused me. Which one is correct? And can anyone please explain how to correctly interpret the p-value and how it properly relates back to probability of making a type I error?
Because of your comments I will make two separate sections: p-values In statistical hypothesis testing you can find 'statistical evidence' for the alternative hypothesis. As I explained in What follows if we fail to reject the null hypothesis?, it is similar to 'proof by contradiction' in mathematics. So if we want to find 'statistical evidence' then we assume the opposite, which we denote $H_0$, of what we try to prove, which we call $H_1$. After this we draw a sample, and from the sample we compute a so-called test-statistic (e.g. a t-value in a t-test). Then, as we assume that $H_0$ is true and that our sample is randomly drawn from the distribution under $H_0$, we can compute the probability of observing values that exceed or equal the value derived from our (random) sample. This probability is called the p-value. If this value is ''small enough'', i.e. smaller than the significance level that we have chosen, then we reject $H_0$ and we consider $H_1$ to be 'statistically proven'. Several things are important in this way of working: (1) we have derived probabilities under the assumption that $H_0$ is true; (2) we have taken a random sample from the distribution that was assumed under $H_0$; (3) we decide to have found evidence for $H_1$ if the test-statistic derived from the random sample has a low probability of being exceeded. So it is not impossible that it is exceeded while $H_0$ is true, and in these cases we make a type I error. So what is a type I error? A type I error is made when the sample, randomly drawn from $H_0$, leads to the conclusion that $H_0$ is false while in reality it is true. Note that this implies that a p-value is not the probability of a type I error. Indeed, a type I error is a wrong decision by the test, and the decision can only be made by comparing the p-value to the chosen significance level. With a p-value alone one cannot make a decision; it is only after comparing the p-value to the chosen significance level that a decision is made, and as long as no decision is made, the type I error is not even defined. What then is the p-value? The potentially wrong rejection of $H_0$ is due to the fact that we draw a random sample under $H_0$, so it could be that we have ''bad luck'' by drawing the sample, and that this ''bad luck'' leads to a false rejection of $H_0$. So the p-value (although this is not fully correct) is more like the probability of drawing a ''bad sample''. The correct interpretation of the p-value is that it is the probability that the test-statistic exceeds or equals the value of the test-statistic derived from a randomly drawn sample under $H_0$.
Intuitively, this means that if we draw a huge number of samples, and with each sample we perform the test, then a fraction $\alpha$ of these tests will lead to a wrong conclusion. It is important to note that we're 'averaging over many samples'; so: same test, many samples. If we use the same sample to do many different tests then we have a multiple testing error (see my answer on Family-wise error boundary: Does re-using data sets on different studies of independent questions lead to multiple testing problems?). In that case one can control the $\alpha$ inflation using techniques to control the family-wise error rate (FWER), e.g. a Bonferroni correction. A different approach than FWER is to control the false discovery rate (FDR). In that case one controls the number of false discoveries (FD) among all discoveries (D), so one controls $\frac{FD}{D}$, where D is the number of rejected $H_0$. So the type I error probability has to do with executing the same test on many different samples. For a huge number of samples the type I error probability will converge to the number of samples leading to a false rejection divided by the total number of samples drawn. The FDR has to do with many tests on the same sample, and for a huge number of tests it will converge to the number of tests where a type I error is made (i.e. the number of false discoveries) divided by the total number of rejections of $H_0$ (i.e. the total number of discoveries). Note that, comparing the two paragraphs above: The context is different; one test and many samples versus many tests and one sample. The denominator for computing the type I error probability is clearly different from the denominator for computing the FDR. The numerators are similar in a way, but have a different context. The FDR tells you that, if you perform many tests on the same sample and you find 1000 discoveries (i.e. rejections of $H_0$), then with an FDR of 0.38 you will have $0.38 \times 1000$ false discoveries.
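A short simulation may help separate these ideas (a sketch with arbitrary settings): when $H_0$ is true, the p-value is uniformly distributed, so across many repeated samples the fraction of rejections converges to the chosen significance level, exactly as described above.
set.seed(6)
p <- replicate(10000, t.test(rnorm(20))$p.value)   # many samples drawn with H0 true (mean = 0)
hist(p)                                            # approximately uniform on [0, 1]
mean(p < 0.05)                                     # close to 0.05, the type I error rate fixed by our alpha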
{ "source": [ "https://stats.stackexchange.com/questions/166323", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/83967/" ] }
167,051
As one becomes interested in statistics, the dichotomy "Frequentist" vs. "Bayesian" soon becomes commonplace (and who hasn't read Nate Silver's The Signal and the Noise , anyway?). In talks and introductory courses, the point of view is overwhelmingly frequentist ( MLE , $p$ values), but there tends to be a tiny fraction of time dedicated to admire Bayes formula and touch upon the idea of a prior distribution , usually tangentially. The tone employed to discuss Bayesian statistics oscillates between respect for its conceptual underpinnings, and a hint of skepticism regarding the chasm between lofty objectives, and arbitrariness in the selection of the prior distribution, or eventual use of frequentist maths after all. Sentences such as "if you are a hard-core Bayesian..." abound. The question is, Who are the Bayesians today? Are they some select academic institutions where you know that if you go there you will become a Bayesian? If so, are they specially sought after? Are we referring to just a few respected statisticians and mathematicians, and if so who are they? Do they even exist as such, these pure "Bayesians"? Would they happily accept the label? Is it always a flattering distinction? Are they mathematicians with peculiar slides in meetings, deprived of any $p$ values and confidence intervals, easily spotted on the brochure? How much of a niche is being a "Bayesian"? Are we referring to a minority of statisticians? Or is current Bayesian-ism equated with machine learning applications? ... Or even more likely, is Bayesian statistics not so much a branch of statistics, but rather an epistemological movement that transcends the ambit of probability calculations into a philosophy of science? In this regard, all scientists would be Bayesian at heart... but there would be no such thing as a pure Bayesian statistician impermeable to frequentist techniques (or contradictions).
I'm going to take your questions in order: The question is, Who are the Bayesians today? Anybody who does Bayesian data analysis and self-identifies as "Bayesian". Just like a programmer is someone who programs and self-identifies as a "programmer". A slight difference is that for historical reasons Bayesian has ideological connotations, because of the often heated argument between proponents of "frequentist" interpretations of probability and proponents of "Bayesian" interpretations of probability. Are they some select academic institutions, where you know that if you go there you will become a Bayesian? No, just like other parts of statistics you just need a good book (and perhaps a good teacher). If so, are they specially sought after? Bayesian data analysis is a very useful tool when doing statistical modeling, which I imagine is a pretty sought-after skill, (even if companies perhaps aren't specifically looking for "Bayesians"). Are we referring to just a few respected statisticians and mathematicians, and if so who are they? There are many respected statisticians that I believe would call themselves Bayesians , but those are not the Bayesians. Do they even exist as such, these pure "Bayesians"? That's a bit like asking "Do these pure programmers exist"? There is an amusing article called 46656 Varieties of Bayesians , and sure there is a healthy argument among "Bayesians" regarding many foundational issues. Just like programmers can argue over the merits of different programming techniques. (BTW, pure programmers program in Haskell). Would they happily accept the label? Some do, some don't. When I discovered Bayesian data analysis I thought it was the best since sliced bread (I still do) and I was happy to call myself a "Bayesian" (not least to irritate the p-value people at my department). Nowadays I don't like the term, I think it might alienate people as it makes Bayesian data analysis sound like some kind of cult, which it isn't, rather than a useful method to have in your statistical toolbox. Is it always a flattering distinction? Nope! As far as I know, the term "Bayesian" was introduced by the famous statistician Fisher as a derogatory term. Before that it was called "inverse probability" or just "probability". Are they mathematicians with peculiar slides in meetings, deprived of any p values and confidence intervals, easily spotted on the brochure? Well, there are conferences in Bayesian statistics, and I don't think they include that many p-values. Whether you'll find the slides peculiar will depend on your background... How much of a niche is being a "Bayesian"? Are we referring to a minority of statisticians? I still think a minority of statisticians deal with Bayesian statistics, but I also think the proportion is growing. Or is current Bayesian-ism equated with machine learning applications? Nope, but Bayesian models are used a lot in machine learning. Here is a great machine learning book that presents machine learning from a Bayesian/probibalistic perspective: http://www.cs.ubc.ca/~murphyk/MLbook/ Hope that answered most of the questions :) Update: [C]ould you please consider adding a list of specific techniques or premises that distinguish Bayesian statistics? 
What distinguishes Bayesian statistics is the use of Bayesian models :) Here is my spin on what a Bayesian model is: A Bayesian model is a statistical model where you use probability to represent all uncertainty within the model, both the uncertainty regarding the output and the uncertainty regarding the input (aka parameters) to the model. The whole prior/posterior/Bayes theorem thing follows from this, but in my opinion, using probability for everything is what makes it Bayesian (and indeed a better word would perhaps just be something like probabilistic model). Now, Bayesian models can be tricky to fit, and there is a host of different computational techniques that are used for this. But these techniques are not Bayesian in themselves. To namedrop some computational techniques: Markov chain Monte Carlo, Metropolis-Hastings, Gibbs sampling, Hamiltonian Monte Carlo, variational Bayes, approximate Bayesian computation, particle filters, Laplace approximation, and so on... Who was the famous statistician who introduced the term 'Bayesian' as derogatory? It was supposedly Ronald Fisher. The paper When did Bayesian inference become "Bayesian"? gives the history of the term "Bayesian".
{ "source": [ "https://stats.stackexchange.com/questions/167051", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/67822/" ] }
167,066
I'm performing a study where I'm selecting kernel type and hyperparameters in an inner CV loop and an outer loop doing 10-fold CV (using SVR). The output is 10 trained models and performance measures. My question is where do I go from here. When I train a new model with the complete dataset using the selected kernel (by either the hyperparameters that gave the min error during the 10-fold CV or finding the optimal ones with the selected kernel for the complete dataset) the final model I end up with is not validated against training data. Is it reasonable to do this and use the average error previously obtained from 10-fold CV as an "informal" performance estimate since the model is trained on a slightly larger dataset? How would I word this in a journal paper? My thesis advisor is questioning it for one.
{ "source": [ "https://stats.stackexchange.com/questions/167066", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/85859/" ] }
167,827
In my econometrics textbook (Introductory Econometrics) covering OLS, the author writes, "SSR must fall when another explanatory variable is added." Why is that?
Assuming you have a linear regression model, for easy notation consider first one and then two covariables. This generalizes to two sets of covariables. The first model is $$ I \colon y_i=\beta_0 + \beta_1 x_{1i}+\epsilon_{i} $$ the second model is $$ II \colon y_i = \beta_0 + \beta_1 x_{1i} + \beta_2 x_{2i} + \epsilon_i $$ Each is solved by minimizing the sum of squared residuals: for model one we want to minimize $\text{SSR}_1 = \sum_i (y_i-\beta_0-\beta_1 x_{1i})^2$ and for model two we want to minimize $\text{SSR}_2 = \sum_i (y_i-\beta_0-\beta_1 x_{1i}-\beta_2 x_{2i})^2$. Let's say you have found the correct estimators for model 1; then you can obtain that exact same residual sum of squares in model two by choosing the same values for $\beta_0, \beta_1$ and letting $\beta_2=0$. Now you can, possibly, find a lower residual sum of squares by searching for a better value for $\beta_2$. To summarize, the models are nested, in the sense that everything we can model with model 1 can be matched by model two; model two is more general than model 1. So, in the optimization, we have larger freedom with model two and can always find a solution that is at least as good. This has really nothing to do with statistics but is a general fact about optimization.
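The nesting argument is easy to see in R (a sketch with simulated data; note that the added regressor is pure noise, yet the SSR still does not increase):
set.seed(7)
n  <- 100
x1 <- rnorm(n); x2 <- rnorm(n)                 # x2 is unrelated to y
y  <- 1 + 2 * x1 + rnorm(n)
ssr1 <- sum(resid(lm(y ~ x1))^2)
ssr2 <- sum(resid(lm(y ~ x1 + x2))^2)
c(ssr1, ssr2)   # ssr2 <= ssr1: model two can always reproduce model one by setting beta_2 = 0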
{ "source": [ "https://stats.stackexchange.com/questions/167827", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/71695/" ] }
168,064
In recent years, Convolutional Neural Networks (CNNs) have become the state-of-the-art for object recognition in computer vision. Typically, a CNN consists of several convolutional layers, followed by two fully-connected layers. An intuition behind this is that the convolutional layers learn a better representation of the input data, and the fully connected layers then learn to classify this representation into a set of labels. However, before CNNs started to dominate, Support Vector Machines (SVMs) were the state-of-the-art. So it seems sensible to say that an SVM is still a stronger classifier than a two-layer fully-connected neural network. Therefore, I am wondering why state-of-the-art CNNs tend to use the fully connected layers for classification rather than an SVM? In this way, you would have the best of both worlds: a strong feature representation, and a strong classifier, rather than a strong feature representation but only a weak classifier... Any ideas?
What is an SVM, anyway? I think the answer for most purposes is “the solution to the following optimization problem”: $$ \begin{split} \operatorname*{arg\,min}_{f \in \mathcal H} \frac{1}{n} \sum_{i=1}^n \ell_\mathit{hinge}(f(x_i), y_i) \, + \lambda \lVert f \rVert_{\mathcal H}^2 \\ \ell_\mathit{hinge}(t, y) = \max(0, 1 - t y) ,\end{split} \tag{SVM} $$ where $\mathcal H$ is a reproducing kernel Hilbert space, $y$ is a label in $\{-1, 1\}$ , and $t = f(x) \in \mathbb R$ is a “decision value”; our final prediction will be $\operatorname{sign}(t)$ . In the simplest case, $\mathcal H$ could be the space of affine functions $f(x) = w \cdot x + b$ , and $\lVert f \rVert_{\mathcal H}^2 = \lVert w \rVert^2 + b^2$ . (Handling of the offset $b$ varies depending on exactly what you’re doing, but that’s not important for our purposes.) In the ‘90s through the early ‘10s, there was a lot of work on solving this particular optimization problem in various smart ways, and indeed that’s what LIBSVM / LIBLINEAR / SVMlight / ThunderSVM / ... do. But I don’t think that any of these particular algorithms are fundamental to “being an SVM,” really. Now, how do we train a deep network? Well, we try to solve something like, say, $$ \begin{split} \operatorname*{arg\,min}_{f \in \mathcal F} \frac1n \sum_{i=1}^n \ell_\mathit{CE}(f(x_i), y) + R(f) \\ \ell_\mathit{CE}(p, y) = - y \log(p) - (1-y) \log(1 - p) ,\end{split} \tag{$\star$} $$ where now $\mathcal F$ is the set of deep nets we consider, which output probabilities $p = f(x) \in [0, 1]$ . The explicit regularizer $R(f)$ might be an L2 penalty on the weights in the network, or we might just use $R(f) = 0$ . Although we could solve (SVM) up to machine precision if we really wanted, we usually can’t do that for $(\star)$ when $\mathcal F$ is more than one layer; instead we use stochastic gradient descent to attempt at an approximate solution. If we take $\mathcal F$ as a reproducing kernel Hilbert space and $R(f) = \lambda \lVert f \rVert_{\mathcal F}^2$ , then $(\star)$ becomes very similar to (SVM), just with cross-entropy loss instead of hinge loss: this is also called kernel logistic regression. My understanding is that the reason SVMs took off in a way kernel logistic regression didn’t is largely due to a slight computational advantage of the former (more amenable to these fancy algorithms), and/or historical accident; there isn’t really a huge difference between the two as a whole, as far as I know. (There is sometimes a big difference between an SVM with a fancy kernel and a plain linear logistic regression, but that’s comparing apples to oranges.) So, what does a deep network using an SVM to classify look like? Well, that could mean some other things, but I think the most natural interpretation is just using $\ell_\mathit{hinge}$ in $(\star)$ . One minor issue is that $\ell_\mathit{hinge}$ isn’t differentiable at $\hat y = y$ ; we could instead use $\ell_\mathit{hinge}^2$ , if we want. (Doing this in (SVM) is sometimes called “L2-SVM” or similar names.) Or we can just ignore the non-differentiability; the ReLU activation isn’t differentiable at 0 either, and this usually doesn’t matter. This can be justified via subgradients , although note that the correctness here is actually quite subtle when dealing with deep networks. 
An ICML workshop paper – Tang, Deep Learning using Linear Support Vector Machines , ICML 2013 workshop Challenges in Representation Learning – found using $\ell_\mathit{hinge}^2$ gave small but consistent improvements over $\ell_\mathit{CE}$ on the problems they considered. I’m sure others have tried (squared) hinge loss since in deep networks, but it certainly hasn’t taken off widely. (You have to modify both $\ell_\mathit{CE}$ as I’ve written it and $\ell_\mathit{hinge}$ to support multi-class classification, but in the one-vs-rest scheme used by Tang, both are easy to do.) Another thing that’s sometimes done is to train CNNs in the typical way, but then take the output of a late layer as "features" and train a separate SVM on that. This was common in early days of transfer learning with deep features, but is I think less common now. Something like this is also done sometimes in other contexts, e.g. in meta-learning by Lee et al., Meta-Learning with Differentiable Convex Optimization , CVPR 2019, who actually solved (SVM) on deep network features and backpropped through the whole thing. (They didn't, but you can even do this with a nonlinear kernel in $\mathcal H$ ; this is also done in some other "deep kernels" contexts.) It’s a very cool approach – one that I've also worked on – and in certain domains this type of approach makes a ton of sense, but there are some pitfalls, and I don’t think it’s very applicable to a typical "plain classification" problem.
{ "source": [ "https://stats.stackexchange.com/questions/168064", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/72307/" ] }
168,622
In traditional statistics, while building a model, we check for multicollinearity using methods such as estimates of the variance inflation factor (VIF), but in machine learning, we instead use regularization for feature selection and don't seem to check whether features are correlated at all. Why do we do that?
Considering multicollinearity is important in regression analysis because, in extreme cases, it directly bears on whether or not your coefficients are uniquely identified in the data. In less severe cases, it can still mess with your coefficient estimates; small changes in the data used for estimation may cause wild swings in estimated coefficients. These can be problematic from an inferential standpoint: If two variables are highly correlated, increases in one may be offset by decreases in another so the combined effect is to negate each other. With more than two variables, the effect can be even more subtle, but if the predictions are stable, that is often enough for machine learning applications. Consider why we regularize in a regression context: We need to prevent the model from being too flexible. Applying the correct amount of regularization will slightly increase the bias for a larger reduction in variance. The classic example of this is adding polynomial terms and interaction effects to a regression: In the degenerate case, the prediction equation will interpolate data points, but probably be terrible when attempting to predict the values of unseen data points. Shrinking those coefficients will likely minimize or entirely eliminate some of those coefficients and improve generalization. A random forest, however, could be seen to have a regularization parameter through the number of variables sampled at each split: you get better splits the larger the mtry (more features to choose from; some of them are better than others), but that also makes each tree more highly correlated with each other tree, somewhat mitigating the diversifying effect of estimating multiple trees in the first place. This dilemma compels one to find the right balance, usually achieved using cross-validation. Importantly, and in contrast to a regression analysis, the predictions of the random forest model are not harmed by highly collinear variables: even if two of the variables provide the same child node purity, you can just pick one. Likewise, for something like an SVM, you can include more predictors than observations because the kernel trick lets you operate solely on the inner product of those feature vectors. Having more features than observations would be a problem in regressions, but the kernel trick means we only estimate a coefficient for each exemplar, while the regularization parameter $C$ reduces the flexibility of the solution -- which is decidedly a good thing, since estimating $N$ parameters for $N$ observations in an unrestricted way will always produce a perfect model on the training data -- and we come full circle, back to the ridge/LASSO/elastic net regression scenario where we have the model flexibility constrained as a check against an overly optimistic model. A review of the KKT conditions of the SVM problem reveals that the SVM solution is unique, so we don't have to worry about the identification problems which arose in the regression case. Finally, consider the actual impact of multicollinearity. It doesn't change the predictive power of the model (at least, on the training data) but it does screw with our coefficient estimates. In most ML applications, we don't care about coefficients themselves, just the loss of our model predictions, so in that sense, checking VIF doesn't actually answer a consequential question.
(But if a slight change in the data causes a huge fluctuation in coefficients [a classic symptom of multicollinearity], it may also change predictions, in which case we do care -- but all of this [we hope!] is characterized when we perform cross-validation, which is a part of the modeling process anyway.) A regression is more easily interpreted, but interpretation might not be the most important goal for some tasks.
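A small simulation of that last point (a sketch with two nearly collinear, made-up predictors): the coefficients jump around from one bootstrap resample to the next, yet the predictions barely move, which is why a purely predictive workflow can often get away with ignoring VIF.
set.seed(8)
n  <- 200
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.01)                 # nearly an exact copy of x1
y  <- x1 + x2 + rnorm(n)
dat  <- data.frame(y, x1, x2)
newd <- data.frame(x1 = 1, x2 = 1)
out <- t(replicate(5, {
  fit <- lm(y ~ x1 + x2, data = dat[sample(n, n, replace = TRUE), ])   # bootstrap resample
  c(coef(fit)[-1], pred = predict(fit, newd))
}))
out   # the two coefficients swing wildly and offset each other; the prediction column stays stable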
{ "source": [ "https://stats.stackexchange.com/questions/168622", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/86752/" ] }
168,666
The learning rate parameter ($\nu \in [0,1]$) in Gradient Boosting shrinks the contribution of each new base model -typically a shallow tree- that is added in the series. It was shown to dramatically increase test set accuracy, which is understandable as with smaller steps, the minimum of the loss function can be attained more precisely. I don't get why the learning rate is considered a regularization parameter ? Citing the Elements of Statistical Learning , section 10.12.1, p.364: Controlling the number of trees is not the only possible regularization strategy. As with ridge regression and neural networks, shrinkage techniques can be employed as well. Smaller values of $\nu$ (more shrinkage) result in larger training risk for the same number of iterations $M$. Thus, both $\nu$ and $M$ control prediction risk on the training data. Regularization means "way to avoid overfitting", so it is clear that the number of iterations $M$ is crucial in that respect (a $M$ that is too high leads to overfitting). But: Smaller values of $\nu$ (more shrinkage) result in larger training risk for the same number of iterations $M$. just means that with low learning rates, more iterations are needed to achieve the same accuracy on the training set. So how does that relate to overfitting?
Suppose you are trying to minimize an objective function over a number of iterations, and the current value is $100.0$. In the given data set, there are no "irreducible errors" and you can minimize the loss to $0.0$ for your training data. Now you have two ways to do it. The first way is a "large learning rate" and few iterations. Suppose you can reduce the loss by $10.0$ in each iteration; then, in $10$ iterations, you can reduce the loss to $0.0$. The second way would be a "slow learning rate" but more iterations. Suppose you can reduce the loss by $1.0$ in each iteration and you need $100$ iterations to reach $0.0$ loss on your training data. Now think about this: are the two approaches equal? And if not, which is better in an optimization context and in a machine learning context? In the optimization literature, the two approaches are the same, as they both converge to the optimal solution. On the other hand, in machine learning, they are not equal, because in most cases we do not drive the loss on the training set to $0$, which would cause over-fitting. We can think about the first approach as a "coarse level grid search" and the second approach as a "fine level grid search". The second approach usually works better, but needs more computational power for more iterations. To prevent over-fitting, we can do different things. The first way would be to restrict the number of iterations: suppose we are using the first approach and we limit the number of iterations to 5. At the end, the loss for the training data is $50$. (BTW, this would be very strange from the optimization point of view, which means we could further improve our solution / it has not converged, but we chose not to. In optimization, we usually explicitly add constraints or penalization terms to the objective function, but we usually do not limit the number of iterations.) On the other hand, we can also use the second approach: if we set the learning rate to be small, say reducing the loss by $0.1$ in each iteration, then even though we have a large number of iterations, say $500$, we still have not minimized the loss to $0.0$. This is why a small learning rate is sort of equal to "more regularization". Here is an example of using different learning rates on an experimental data set using xgboost. Please check the following two links to see what eta and n_iterations mean. Parameters for Tree Booster XGBoost Control overfitting For the same number of iterations, say $50$, a small learning rate is "under-fitting" (or the model has "high bias"), and a large learning rate is "over-fitting" (or the model has "high variance"). PS. The evidence of under-fitting is that both the training and testing sets have large error, and the error curves for training and testing are close to each other. The sign of over-fitting is that the training set's error is very low and the testing set's error is very high, and the two curves are far away from each other.
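Here is a rough hand-rolled illustration of that trade-off (a sketch using rpart stumps as the weak learner on made-up data; gbm or xgboost would show the same pattern through their shrinkage / eta arguments): with the same budget of iterations, the large learning rate drives the training error down much further, but the small learning rate usually ends up with the lower test error.
library(rpart)
set.seed(9)
n <- 300
x <- runif(n, 0, 10); y <- sin(x) + rnorm(n, sd = 0.3)
tr <- sample(n, 150); te <- setdiff(seq_len(n), tr)
boost <- function(nu, M = 100) {
  f_tr <- rep(mean(y[tr]), length(tr)); f_te <- rep(mean(y[tr]), length(te))
  for (m in seq_len(M)) {
    r     <- y[tr] - f_tr                                        # residuals = negative gradient of squared loss
    stump <- rpart(r ~ x, data = data.frame(r = r, x = x[tr]),
                   control = rpart.control(maxdepth = 1, cp = 0, minsplit = 10))
    f_tr  <- f_tr + nu * predict(stump, data.frame(x = x[tr]))   # shrunk update, step size nu
    f_te  <- f_te + nu * predict(stump, data.frame(x = x[te]))
  }
  c(train = mean((y[tr] - f_tr)^2), test = mean((y[te] - f_te)^2))
}
rbind(fast = boost(nu = 1), slow = boost(nu = 0.1))   # same 100 iterations: the fast learner fits the training
                                                      # data much more closely but typically tests worse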
{ "source": [ "https://stats.stackexchange.com/questions/168666", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/86774/" ] }
168,681
I have a number of observations that occur around my city (a small area), and several of them have latitude and longitude. I have been looking into predicting the latitude/longitude of the observations that are not geographically tagged, but I am unsure of the best approach. One approach I have seen divides an area into smaller sections and predicts the likelihood that a point will fall into any specific section. Wing and Baldridge, 2011: https://scholar.google.com/scholar?cluster=7781354766705840422&hl=en&as_sdt=0,11 Han, Cook, and Baldwin, 2014: https://www.jair.org/papers/paper4200.html Do you guys know of any alternative approaches to solve this problem? It seems that estimating latitude and longitude separately would be a mistake, since they are related. The grid approach would address that, but it might lack precision.
{ "source": [ "https://stats.stackexchange.com/questions/168681", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/64776/" ] }
169,109
I don't understand why the "negative binomial" random variable has that name. What is negative about it? What is binomial about it? What is negative-binomial about it?
It's a reference to the fact that a certain binomial coefficient that appears in the formula for that distribution can be written more simply with negative numbers. When you conduct a series of experiments with success probability $p$, the probability of observing exactly $k$ successes before the $r$-th failure is $\binom{k+r-1}{k} p^k(1-p)^r$. This can also be written as $(-1)^k\binom{-r}{k}p^k(1-p)^r$, and the word "negative" refers to that $-r$ in the binomial coefficient. Observe how this formula looks just like the formula for the ordinary binomial distribution except for that sign coefficient. Another name for the negative binomial distribution is Pascal's distribution, so there is that too. ========================================================================= More detailed answer according to Wikipedia: The probability mass function of the negative binomial distribution is $f(k; r, p) \equiv \Pr(X = k) = \binom{k+r-1}{k} p^k(1-p)^r \quad\text{for }k = 0, 1, 2, \dotsc $ Here the quantity in parentheses is the binomial coefficient, and is equal to $\binom{k+r-1}{k} = \frac{(k+r-1)!}{k!\,(r-1)!} = \frac{(k+r-1)(k+r-2)\dotsm(r)}{k!}$. This quantity can alternatively be written in the following manner, explaining the name “negative binomial”: $\frac{(k+r-1)\dotsm(r)}{k!} = (-1)^k \frac{(-r)(-r-1)(-r-2)\dotsm(-r-k+1)}{k!} = (-1)^k\binom{-r}{k}$.
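A quick numerical check of that identity in R (the particular values of r, p and k below are arbitrary):

r <- 3; p <- 0.4; k <- 0:10

# the "positive" and "negative" forms of the binomial coefficient agree
all.equal(choose(k + r - 1, k), (-1)^k * choose(-r, k))

# and both reproduce the pmf quoted above; note that R's dnbinom() counts the
# other outcome, so its prob argument plays the role of 1 - p in this notation
all.equal(choose(k + r - 1, k) * p^k * (1 - p)^r,
          dnbinom(k, size = r, prob = 1 - p))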
{ "source": [ "https://stats.stackexchange.com/questions/169109", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/81598/" ] }
169,141
While formally deriving the confidence interval of an estimate, I ended up with a formula that very closely resembles the way the $p$-value is computed. Thus the question: are they formally equivalent? I.e. is rejecting a hypothesis $H_0 = 0$ with a critical value $\alpha$ equivalent to $0$ not belonging to the confidence interval with critical value $\alpha$?
Yes and no. First the "yes" What you've observed is that when a test and a confidence interval is based on the same statistic, there is an equivalence between them: we can interpret the $p$-value as the smallest value of $\alpha$ for which the null value of the parameter would be included in the $1-\alpha$ confidence interval. Let $\theta$ be an unknown parameter in the parameter space $\Theta\subseteq\mathbb{R}$, and let the sample $\mathbf{x}=(x_1,\ldots,x_n)\in\mathcal{X}^ n\subseteq\mathbb{R}^n$ be a realization of the random variable $\mathbf{X}=(X_1,\ldots,X_n)$. For simplicity, define a confidence interval $I_\alpha(\mathbf{X})$ as a random interval such that its coverage probability $$ P_\theta(\theta\in I_\alpha(\mathbf{X}))= 1-\alpha\qquad\mbox{for all }\alpha\in(0,1). $$ (You could similarly consider more general intervals, where the coverage probability either is bounded by or approximately equal to $1-\alpha$. The reasoning is analogous.) Consider a two-sided test of the point-null hypothesis $H_0(\theta_0): \theta=\theta_0$ against the alternative $H_1(\theta_0): \theta\neq \theta_0$. Let $\lambda(\theta_0,\mathbf{x})$ denote the p-value of the test. For any $\alpha\in(0,1)$, $H_0(\theta_0)$ is rejected at the level $\alpha$ if $\lambda(\theta_0,x)\leq\alpha$. The level $\alpha$ rejection region is the set of $\mathbf{x}$ which lead to the rejection of $H_0(\theta_0)$: $$ R_\alpha(\theta_0)=\{\mathbf{x}\in\mathbb{R}^n: \lambda(\theta_0,\mathbf{x})\leq\alpha\}.$$ Now, consider a family of two-sided tests with p-values $\lambda(\theta,\mathbf{x})$, for $\theta\in\Theta$. For such a family we can define an inverted rejection region $$ Q_\alpha(\mathbf{x})=\{\theta\in\Theta: \lambda(\theta,\mathbf{x})\leq\alpha\}.$$ For any fixed $\theta_0$, $H_0(\theta_0)$ is rejected if $\mathbf{x}\in R_\alpha(\theta_0)$, which happens if and only if $\theta_0\in Q_\alpha(\mathbf{x})$, that is, $$ \mathbf{x}\in R_\alpha(\theta_0) \Leftrightarrow \theta_0\in Q_\alpha(\mathbf{x}). $$ If the test is based on a test statistic with a completely specified absolutely continuous null distribution, then $\lambda(\theta_0,\mathbf{X})\sim \mbox{U}(0,1)$ under $H_0(\theta_0)$. Then $$ P_{\theta_0}(\mathbf{X}\in R_\alpha(\theta_0))=P_{\theta_0}(\lambda(\theta_0,\mathbf{X})\leq\alpha)=\alpha. $$ Since this equation holds for any $\theta_0\in\Theta$ and since the equation above it implies that $$P_{\theta_0}(\mathbf{X}\in R_\alpha(\theta_0))=P_{\theta_0}(\theta_0\in Q_\alpha(\mathbf{X})),$$ it follows that the random set $Q_\alpha(\mathbf{x})$ always covers the true parameter $\theta_0$ with probability $\alpha$. Consequently, letting $Q_\alpha^C(\mathbf{x})$ denote the complement of $Q_\alpha(\mathbf{x})$, for all $\theta_0\in\Theta$ we have $$P_{\theta_0}(\theta_0\in Q_\alpha^C(\mathbf{X}))=1-\alpha,$$ meaning that the complement of the inverted rejection region is a $1-\alpha$ confidence interval for $\theta$. An illustration is given below, showing rejection regions and confidence intervals corresponding to the the $z$-test for a normal mean, for different null means $\theta$ and different sample means $\bar{x}$, with $\sigma=1$. $H_0(\theta)$ is rejected if $(\bar{x},\theta)$ is in the shaded light grey region. Shown in dark grey is the rejection region $R_{0.05}(-0.9)=(-\infty,-1.52)\cup(-0.281,\infty)$ and the confidence interval $I_{0.05}(1/2)=Q_{0.05}^C(1/2)=(-0.120,1.120)$. (Much of this is taken from my PhD thesis .) 
Now for the "no" Above I described the standard way of constructing confidence intervals. In this approach, we use some statistic related to the unknown parameter $\theta$ to construct the interval. There are also intervals based on minimization algorithms, which seek to minimize the length of the interval condition on the value of $X$. Usually, such intervals do not correspond to a test. This phenomenon has to do with problems related to such intervals not being nested, meaning that the 94 % interval can be shorter than the 95 % interval. For more on this, see Section 2.5 of this recent paper of mine (to appear in Bernoulli). And a second "no" In some problems, the standard confidence interval is not based on the same statistic as the standard test (as discussed by Michael Fay in this paper ). In those cases, confidence intervals and tests may not give the same results. For instance, $\theta_0=0$ may be rejected by the test even though 0 is included in the confidence interval. This does not contradict the "yes" above, as different statistics are used. And sometimes "yes" is not a good thing As pointed out by f coppens in a comment, sometimes intervals and tests have somewhat conflicting goals. We want short intervals and tests with high power, but the shortest interval does not always correspond to the test with the highest power. For some examples of this, see this paper (multivariate normal distribution), or this (exponential distribution), or Section 4 of my thesis . Bayesians can also say both yes and no Some years ago, I posted a question here about whether a test-interval-equivalence exists also in Bayesian statistics. The short answer is that using standard Bayesian hypothesis testing, the answer is "no". By reformulating the testing problem a little bit, the answer can however be "yes". (My attempts at answering my own question eventually turned into a paper !)
{ "source": [ "https://stats.stackexchange.com/questions/169141", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/12100/" ] }
169,156
I have heard many times about the curse of dimensionality, but somehow I'm still unable to grasp the idea; it's all foggy. Can anyone explain this in the most intuitive way, as you would explain it to a child, so that I (and others as confused as I am) can understand this once and for all? EDIT: Now, let's say that the child somehow heard about clustering (for example, they know how to cluster their toys :) ). How would the increase of dimensionality make the job of clustering their toys harder? For example, they used to consider only the shape of the toy and the color of the toy (one-color toys), but now need to consider the size and the weight of toys also. Why is it more difficult for the child to find similar toys? EDIT 2 For the sake of discussion I need to clarify that by - "Why is it more difficult for the child to find similar toys" - I also mean why is the notion of distance lost in high-dimensional spaces?
Probably the kid will like to eat cookies, so let us assume that you have a whole truck with cookies having a different colour, a different shape, a different taste, a different price ... If the kid has to choose but takes into account only one characteristic, e.g. the taste, then (s)he has four possibilities: sweet, salty, sour, bitter, so the kid only has to try four cookies to find what (s)he likes most. If the kid likes combinations of taste and colour, and there are 4 (I am rather optimistic here :-) ) different colours, then he already has to choose among 4x4 different types; If he wants, in addition, to take into account the shape of the cookies and there are 5 different shapes then he will have to try 4x4x5=80 cookies. We could go on, but after eating all these cookies he might already have belly-ache ... before he can make his best choice :-) Apart from the belly-ache, it can get really difficult to remember the differences in the taste of each cookie. As you can see (@Almo) most (all?) things become more complicated as the number of dimensions increases; this holds for adults, for computers and also for kids.
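The same multiplication in a couple of lines of R, plus a small simulation (with made-up numbers, not part of the cookie story) showing how distances stop being informative as the number of characteristics grows, which is what the question's second edit asks about:

# the number of distinct cookie "types" explodes multiplicatively
cumprod(c(taste = 4, colour = 4, shape = 5, price = 10))   # 4, 16, 80, 800 ...

# and distances lose their meaning: among 100 random points, the nearest and
# farthest neighbours become almost equally far away as the dimension grows
set.seed(1)
for (d in c(2, 10, 100, 1000)) {
  x <- matrix(runif(100 * d), ncol = d)
  dd <- dist(x)
  cat(sprintf("d = %4d   min/max pairwise distance = %.3f\n", d, min(dd) / max(dd)))
}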
{ "source": [ "https://stats.stackexchange.com/questions/169156", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/54659/" ] }
169,438
As we all know, there are 2 methods to evaluate the logistic regression model and they are testing very different things Predictive power: Get a statistic that measures how well you can predict the dependent variable based on the independent variables. The well-known Pseudo R^2 are McFadden (1974) and Cox and Snell (1989). Goodness-of-fit statistics The test is telling whether you could do even better by making the model more complicated, which is actually testing whether there are any non-linearities or interactions that you have missed. I implemented both tests on my model, which added quadratic and interaction already: >summary(spec_q2) Call: glm(formula = result ~ Top + Right + Left + Bottom + I(Top^2) + I(Left^2) + I(Bottom^2) + Top:Right + Top:Bottom + Right:Left, family = binomial()) Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 0.955431 8.838584 0.108 0.9139 Top 0.311891 0.189793 1.643 0.1003 Right -1.015460 0.502736 -2.020 0.0434 * Left -0.962143 0.431534 -2.230 0.0258 * Bottom 0.198631 0.157242 1.263 0.2065 I(Top^2) -0.003213 0.002114 -1.520 0.1285 I(Left^2) -0.054258 0.008768 -6.188 6.09e-10 *** I(Bottom^2) 0.003725 0.001782 2.091 0.0366 * Top:Right 0.012290 0.007540 1.630 0.1031 Top:Bottom 0.004536 0.002880 1.575 0.1153 Right:Left -0.044283 0.015983 -2.771 0.0056 ** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 3350.3 on 2799 degrees of freedom Residual deviance: 1984.6 on 2789 degrees of freedom AIC: 2006.6 and the predicted power is as below, the MaFadden is 0.4004, and the value between 0.2~0.4 should be taken to present very good fit of the model(Louviere et al (2000), Domenich and McFadden (1975)) : > PseudoR2(spec_q2) McFadden Adj.McFadden Cox.Snell Nagelkerke McKelvey.Zavoina Effron Count Adj.Count 0.4076315 0.4004680 0.3859918 0.5531859 0.6144487 0.4616466 0.8489286 0.4712500 AIC Corrected.AIC 2006.6179010 2006.7125925 and the goodness-of-fit statistics: > hoslem.test(result,phat,g=8) Hosmer and Lemeshow goodness of fit (GOF) test data: result, phat X-squared = 2800, df = 6, p-value < 2.2e-16 As my understanding, GOF is actually testing the following null and alternative hypothesis: H0: The models does not need interaction and non-linearity H1: The models needs interaction and non-linearity Since my models added interaction, non-linearity already and the p-value shows H0 should be rejected, so I came to the conclusion that my model needs interaction, non-linearity indeed. Hope my interpretation is correct and thanks for any advise in advance, thanks.
There are several issues to address. $R^2$ measures by themselves never measure goodness of fit; they measure mainly predictive discrimination. Goodness of fit only comes from comparing $R^2$ with the $R^2$ from a richer model The Hosmer-Lemeshow test is for overall calibration error, not for any particular lack of fit such as quadratic effects. It does not properly take overfitting into account, is arbitrary to choice of bins and method of computing quantiles, and often has power that is too low. For these reasons the Hosmer-Lemeshow test is no longer recommended. Hosmer et al have a better one d.f. omnibus test of fit, implemented in the R rms package residuals.lrm function. For your case goodness of fit can be assessed by jointly testing (in a "chunk" test) the contribution of all the square and interaction terms. But I recommend specifying the model to make it more likely to fit up front (especially with regard to relaxing linearity assumptions using regression splines) and using the bootstrap to estimate overfitting and to get an overfitting-corrected high-resolution smooth calibration curve to check absolute accuracy. These are done using the R rms package. On the last point, I prefer the philosophy that models be flexible (as limited by the sample size, anyway) and that we concentrate more on "fit" than "lack of fit".
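A hedged sketch of that workflow with the rms package, using the variable names from the question. This assumes the data frame is called dat; the spline knot counts and bootstrap repetitions are arbitrary choices, and interactions could be added to the formula in the same way:

library(rms)
dd <- datadist(dat); options(datadist = "dd")

# relax linearity up front with restricted cubic splines
f <- lrm(result ~ rcs(Top, 4) + rcs(Right, 4) + rcs(Left, 4) + rcs(Bottom, 4),
         data = dat, x = TRUE, y = TRUE)

anova(f)                     # "chunk" tests, including a pooled test of all nonlinear terms
resid(f, type = "gof")       # single d.f. omnibus goodness-of-fit test
validate(f, B = 300)         # bootstrap overfitting-corrected indexes
plot(calibrate(f, B = 300))  # overfitting-corrected smooth calibration curve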
{ "source": [ "https://stats.stackexchange.com/questions/169438", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/83782/" ] }
169,623
In Bayesian inference a predictive distribution for future data is derived by integrating out unknown parameters; integrating over the posterior distribution of those parameters gives a posterior predictive distribution—a distribution for future data conditional on those already observed. What non-Bayesian methods for predictive inference are there that take into account uncertainty in parameter estimates (i.e. that don't merely plug maximum-likelihood estimates or whatever back into a density function) ? Everyone knows how to calculate prediction intervals after a linear regression, but what are the principles behind the calculation & how can they be applied in other situations (e.g. calculating an exact prediction interval for a new exponential variate after estimating the rate parameter from data)?
Non-Bayesian predictive inference (apart from the SLR case) is a relatively recent field. Under the heading of "non-Bayesian" we can subdivide the approaches into those that are "classical" frequentist vs those that are "likelihood" based. Classical Frequentist Prediction As you know, the "gold standard" in frequentism is to achieve the nominal coverage under repeated sampling. For example, we want a 95% Confidence Region to contain the true parameter(s) in 95% of samples from the same underlying population. Or, we expect to commit Type I and II errors in a hypothesis test on average equal to $\alpha$ and $\beta$. Finally, and most germane to this question, we expect our 95% Prediction Interval to contain the next sample point 95% of the time. Now, I've generally had issues with how classical PI's are presented and taught in most stats courses, because the overwhelming tendency is to interpret these as Bayesian posterior predictive intervals, which they are decidedly not. Most fundamentally, they are talking about different probabilities! Bayesian's make no claim on the repeated sampling performance of their quantities (otherwise, they'd be frequentists). Second, a Bayesian PI is actually accomplishing something more similar in spirit to a Classical Tolerance Interval than to a Classical Prediction Interval. For reference: Tolerance Intervals need to be specified by two probabilities: The confidence and the coverage. The confidence tells us how often it is correct in repeated samples. The coverage tells us the minimum probability measure of the interval under the true distribution (as opposed to the PI, which gives the expected probability measure...again under repeated sampling). This is basically what the Bayesian PI is trying to do as well, but without any repeated-sampling claims. So, the basic logic of the Stats 101 Simple Linear Regression is to derive the repeated sampling properties of the PI under the assumption of normality. Its the frequentist+Gaussian approach that is typically thought of as "classical" and taught in intro stats classes. This is based on the simplicity of the resulting calculations (see Wikipedia for a nice overview). Non-gaussian probability distributions are generally problematic because they can lack pivotal quantities that can be neatly inverted to get an interval. Therefore, there is no "exact" method for these distributions, often because the interval's properties depend on the true underlying parameters. Acknowledging this inability, another class of prediction arose (and of inference and estimation) with the likelihood approach. Likelihood-based Inference Likelihood-based approaches, like many modern statistical concepts, can be traced back to Ronald Fisher. The basic idea of this school is that, except for special cases, our statistical inferences are on logically weaker ground than when we are dealing with inferences from a normal distribution (whose parameter estimates are orthogonal ), where we can make exact probability statements. In this view of inference, one should really avoid statements about probability except in the exact case, otherwise, one should make statements about the likelihood and acknowledge that one does not know the exact probability of error (in a frequentist sense). Therefore, we can see likelihood as akin to Bayesian probability, but without the integrability requirements or the possible confusion with frequentist probability. 
Its interpretation is entirely subjective...although a likelihood ratio of 0.15 is often recommended for single parameter inference. However, one does not often see papers that explicitly give "likelihood intervals". Why? It appears that this is largely a matter of sociology, as we have all grown accustomed to probability-based confidence statements. Instead, what you often see is an author referring to an "approximate" or "asymptotic" confidence interval of such and such. These intervals are largely derived from likelihood methods, where we are relying on the asymptotic Chi-squared distribution of the likelihood ratio in much the same way we rely on the asymptotic normality of the sample mean. With this "fix" we can now construct "approximate" 95% Confidence Regions with almost as much logical consistency as the Bayesians. From CI to PI in the Likelihood Framework The success and ease of the above likelihood approach led to ideas about how to extend it to prediction. A very nice survey article on this is given here (I will not reproduce its excellent coverage). It can be traced back to David Hinkley in the late 1970's (see JSTOR ), who coined the term. He applied it to the perennial " Pearson's Binomial Prediction Problem ". I'll summarize the basic logic. The fundamental insight is that if we include an un observed data point, say $y$, in our sample, and then perform traditional likelihood inference on $y$ instead of a fixed parameter, then what we get is not just a likelihood function, but a distribution (unnormalized), since the "parameter" $y$ is actually random and therefore can be logically assigned a frequentist probability. The mechanics of this for this particular problem are reviewed in the links I provided. The basic rules for getting rid of "nuisance" parameters to get a predictive likelihood are as follows: If a parameter is fixed (e.g., $\mu, \sigma$), then profile it out of the likelihood. If a parameter is random (e.g., other unobserved data or "random effects"), then you integrate them out (just like in Bayesian approach). The distinction between a fixed and random parameter is unique to likelihood inference, but has connections to mixed effects models, where it seems that the Bayesian, frequentist, and likelihood frameworks collide. Hopefully this answered your question about the broad area of "non-Bayesian" prediction (and inference for that matter). Since hyperlinks can change, I'll also make a plug for the book "In All Likelihood: Statistical Modeling and Inference using Likelihood" which discusses the modern likelihood framework at depth, including a fair amount of the epistemological issues of likelihood vs Bayesian vs frequentist inference and prediction. References Prediction Intervals: Non-parametric methods . Wikipedia. Accessed 9/13/2015. Bjornstad, Jan F. Predictive Likelihood: A Review. Statist. Sci. 5 (1990), no. 2, 242--254. doi:10.1214/ss/1177012175. http://projecteuclid.org/euclid.ss/1177012175 . David Hinkley. Predictive Likelihood . The Annals of Statistics Vol. 7, No. 4 (Jul., 1979) , pp. 718-728 Published by: Institute of Mathematical Statistics Stable URL: http://www.jstor.org/stable/2958920 Yudi Pawitan. In All Likelihood: Statistical Modeling and Inference Using Likelihood. Oxford University Press; 1 edition (August 30, 2001). ISBN-10: 0198507658, ISBN-13: 978-0198507659. Especially Chapters 5.5-5.9, 10, and 16.
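As a concrete example of the sort asked about in the question, an exact frequentist prediction interval for a new exponential observation can be built from the pivotal quantity $X_{new}/\bar{X} \sim F(2, 2n)$ (since $2\lambda X_{new} \sim \chi^2_2$ and $2\lambda\sum_i X_i \sim \chi^2_{2n}$ independently of each other). A short R sketch, with an arbitrary simulated sample:

set.seed(1)
n <- 20
x <- rexp(n, rate = 2)                 # observed sample (true rate unknown in practice)

# X_new / mean(x) ~ F(2, 2n), a pivot free of the rate parameter
alpha <- 0.05
pred_int <- mean(x) * qf(c(alpha / 2, 1 - alpha / 2), 2, 2 * n)
pred_int                               # exact 95% prediction interval for a new observation

# simulation check of the coverage
cover <- replicate(5000, {
  s <- rexp(n, 2); new <- rexp(1, 2)
  new >= mean(s) * qf(alpha / 2, 2, 2 * n) && new <= mean(s) * qf(1 - alpha / 2, 2, 2 * n)
})
mean(cover)                            # close to 0.95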
{ "source": [ "https://stats.stackexchange.com/questions/169623", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/17230/" ] }
169,628
I performed Cox regression analysis for two Biomarkers. Both were significant (p<0.05). Biomarker 1 showed higher significance but lower Exp(B)=hazard ratio value than Biomarker 2 (see below). Biomarker 1 : HR=3.06; 95%CI=1.71-5.48; p<0.001 (sorry, SPSS doesn't show more than 3 decimals) Biomarker 2 : HR=6.05; 95%CI=1.67-21.86; p=0.006 Which of the two Biomarkers is a better (stronger) predictor? In other words: what counts more - the P-value or the Exp(B) value?
{ "source": [ "https://stats.stackexchange.com/questions/169628", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/86880/" ] }
169,644
I've run an experiment in which different subjects had to make a number of decisions, which are stored in the dependent boolean variable Y (0 or 1). I have multiple independent variables which may influence the outcome, namely an ordinal variable containing integers running from -4 to 4 (o1), two nominal variables which may have three values (n1 and n2) and the ID number of the test subject (ID), which is of course nominal. Now the obvious question is how Y depends on o1, n1 and n2. I figured that because Y is a boolean, you want to make a logistic regression model to calculate the chance of the outcome being 1. Because both nominal and ordinal independent variables are used you would use multinomial logistic regression. (Right?) Someone in my lab suggested using a mixed effects model, because the intercept might vary per subject. He had me look up linear mixed effects. But would it not make more sense to use a logistic mixed effects model? Then, my next question is: how can I make this model using software? I plan to use either SPSS or Matlab/octave. I apologise if this is a duplicate question, but an internet search did not answer my question.
{ "source": [ "https://stats.stackexchange.com/questions/169644", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/87347/" ] }
169,664
Consider the standard model for multiple regression $$Y=X\beta+\varepsilon$$ where $\varepsilon \sim \mathcal N(0, \sigma^2I_n)$, so normality, homoscedasticity and uncorrelatedness of errors all hold. Suppose that we perform a ridge regression, by adding the same small amount to all the elements of the diagonal of $X'X$: $$\beta_\mathrm{ridge}=[X'X+kI]^{-1}X'Y$$ There are some values of $k$ for which the ridge estimator has smaller mean squared error than the one obtained by OLS, although $\beta_\mathrm{ridge}$ is a biased estimator of $\beta$. In practice, $k$ is obtained by cross-validation. Here is my question: what are the assumptions underlying the ridge model? To be more concrete, Are all the assumptions of ordinary least squares (OLS) valid with ridge regression? If yes to question 1, how do we test homoscedasticity and lack of autocorrelation with a biased estimator of $\beta$? Is there any work on testing other OLS assumptions (homoscedasticity and lack of autocorrelation) under ridge regression?
What is an assumption of a statistical procedure? I am not a statistician and so this might be wrong, but I think the word "assumption" is often used quite informally and can refer to various things. To me, an "assumption" is, strictly speaking, something that only a theoretical result (theorem) can have. When people talk about assumptions of linear regression ( see here for an in-depth discussion), they are usually referring to the Gauss-Markov theorem that says that under assumptions of uncorrelated, equal-variance, zero-mean errors, OLS estimate is BLUE, i.e. is unbiased and has minimum variance. Outside of the context of Gauss-Markov theorem, it is not clear to me what a "regression assumption" would even mean. Similarly, assumptions of a, say, one-sample t-test refer to the assumptions under which $t$ -statistic is $t$ -distributed and hence the inference is valid. It is not called a "theorem", but it is a clear mathematical result: if $n$ samples are normally distributed, then $t$ -statistic will follow Student's $t$ -distribution with $n-1$ degrees of freedom. Assumptions of penalized regression techniques Consider now any regularized regression technique: ridge regression, lasso, elastic net, principal components regression, partial least squares regression, etc. etc. The whole point of these methods is to make a biased estimate of regression parameters, and hoping to reduce the expected loss by exploiting the bias-variance trade-off. All of these methods include one or several regularization parameters and none of them has a definite rule for selecting the values of these parameter. The optimal value is usually found via some sort of cross-validation procedure, but there are various methods of cross-validation and they can yield somewhat different results. Moreover, it is not uncommon to invoke some additional rules of thumb in addition to cross-validation. As a result, the actual outcome $\hat \beta$ of any of these penalized regression methods is not actually fully defined by the method, but can depend on the analyst's choices. It is therefore not clear to me how there can be any theoretical optimality statement about $\hat \beta$ , and so I am not sure that talking about "assumptions" (presence or absence thereof) of penalized methods such as ridge regression makes sense at all. But what about the mathematical result that ridge regression always beats OLS? Hoerl & Kennard (1970) in Ridge Regression: Biased Estimation for Nonorthogonal Problems proved that there always exists a value of regularization parameter $\lambda$ such that ridge regression estimate of $\beta$ has a strictly smaller expected loss than the OLS estimate. It is a surprising result -- see here for some discussion, but it only proves the existence of such $\lambda$ , which will be dataset-dependent. This result does not actually require any assumptions and is always true, but it would be strange to claim that ridge regression does not have any assumptions. Okay, but how do I know if I can apply ridge regression or not? I would say that even if we cannot talk of assumptions, we can talk about rules of thumb . It is well-known that ridge regression tends to be most useful in case of multiple regression with correlated predictors. It is well-known that it tends to outperform OLS, often by a large margin. It will tend to outperform it even in the case of heteroscedasticity, correlated errors, or whatever else. 
So the simple rule of thumb says that if you have multicollinear data, ridge regression and cross-validation are a good idea. There are probably other useful rules of thumb and tricks of the trade (such as e.g. what to do with gross outliers). But they are not assumptions. Note that for OLS regression one needs some assumptions for $p$ -values to hold. In contrast, it is tricky to obtain $p$ -values in ridge regression. If this is done at all, it is done by bootstrapping or some similar approach and again it would be hard to point at specific assumptions here because there are no mathematical guarantees.
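A small simulation in the spirit of that rule of thumb, with ridge fit by glmnet (alpha = 0) and lambda chosen by cross-validation; the data-generating process below is invented purely for illustration:

library(glmnet)
library(MASS)   # mvrnorm

set.seed(1)
n <- 100; p <- 20
Sigma <- 0.9^abs(outer(1:p, 1:p, "-"))       # strongly correlated predictors
X <- mvrnorm(n, rep(0, p), Sigma)
beta <- rnorm(p, 0, 0.5)
y <- X %*% beta + rnorm(n)

Xnew <- mvrnorm(1000, rep(0, p), Sigma)      # fresh test data from the same population
ynew <- Xnew %*% beta + rnorm(1000)

ols   <- lm(y ~ X)
ridge <- cv.glmnet(X, y, alpha = 0)          # alpha = 0 is ridge; lambda by 10-fold CV

mse_ols   <- mean((ynew - cbind(1, Xnew) %*% coef(ols))^2)
mse_ridge <- mean((ynew - predict(ridge, Xnew, s = "lambda.min"))^2)
c(OLS = mse_ols, ridge = mse_ridge)          # ridge usually has the smaller test MSE here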
{ "source": [ "https://stats.stackexchange.com/questions/169664", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/85761/" ] }
171,043
I have class-imbalanced data & I want to tune the hyperparameters of the boosted trees using xgboost. Questions Is there an equivalent of gridsearchcv or randomsearchcv for xgboost? If not, what is the recommended approach to tune the parameters of xgboost?
The caret package has incorporated xgboost. cv.ctrl <- trainControl(method = "repeatedcv", repeats = 1,number = 3, #summaryFunction = twoClassSummary, classProbs = TRUE, allowParallel=T) xgb.grid <- expand.grid(nrounds = 1000, eta = c(0.01,0.05,0.1), max_depth = c(2,4,6,8,10,14) ) set.seed(45) xgb_tune <-train(formula, data=train, method="xgbTree", trControl=cv.ctrl, tuneGrid=xgb.grid, verbose=T, metric="Kappa", nthread =3 ) Sample output eXtreme Gradient Boosting 32218 samples 41 predictor 2 classes: 'N', 'Y' No pre-processing Resampling: Cross-Validated (3 fold, repeated 1 times) Summary of sample sizes: 21479, 21479, 21478 Resampling results Accuracy Kappa Accuracy SD Kappa SD 0.9324911 0.1094426 0.0009742774 0.008972911 One drawback I see is that other parameters of xgboost like subsample etc are not supported by caret currently. Edit Gamma, colsample_bytree, min_child_weight and subsample etc can now (June 2017) be tuned directly using Caret. Just add them in the grid portion of the above code to make it work. Thanks usεr11852 for highlighting it in the comment.
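Following the edit above, the fuller grid could look something like this (the candidate values below are arbitrary placeholders; caret's xgbTree method expects all seven parameters to be present in the grid):

xgb.grid <- expand.grid(nrounds = c(500, 1000),
                        eta = c(0.01, 0.05, 0.1),
                        max_depth = c(2, 4, 6),
                        gamma = 0,
                        colsample_bytree = c(0.6, 0.8, 1.0),
                        min_child_weight = 1,
                        subsample = c(0.75, 1.0))

set.seed(45)
xgb_tune <- train(formula, data = train, method = "xgbTree",
                  trControl = cv.ctrl, tuneGrid = xgb.grid, metric = "Kappa")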
{ "source": [ "https://stats.stackexchange.com/questions/171043", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/54214/" ] }
171,324
I've read an article saying that when using planned contrasts to find means that are different in a one-way ANOVA, contrasts should be orthogonal so that they are uncorrelated and prevent the type I error from being inflated. I don't understand why orthogonal would mean uncorrelated under any circumstances. I can't find a visual/intuitive explanation of that, so I tried to understand these articles/answers https://www.psych.umn.edu/faculty/waller/classes/FA2010/Readings/rodgers.pdf What does orthogonal mean in the context of statistics? but to me, they contradict each other. The first says that if two variables are uncorrelated and/or orthogonal then they are linearly independent, but that the fact that they are linearly independent does not imply that they are uncorrelated and/or orthogonal. Now on the second link there are answers that state things like "orthogonal means uncorrelated" and "If X and Y are independent then they are Orthogonal. But the converse is not true". Another interesting comment in the second link states that the correlation coefficient between two variables is equal to the cosine of the angle between the two vectors corresponding to these variables, which implies that two orthogonal vectors are completely uncorrelated (which isn't what the first article claims). So what's the true relationship between independence, orthogonality and correlation? Maybe I missed something but I can't find out what it is.
Independence is a statistical concept. Two random variables $X$ and $Y$ are statistically independent if their joint distribution is the product of the marginal distributions, i.e. $$ f(x, y) = f(x) f(y) $$ if each variable has a density $f$, or more generally $$ F(x, y) = F(x) F(y) $$ where $F$ denotes each random variable's cumulative distribution function. Correlation is a weaker but related statistical concept. The (Pearson) correlation of two random variables is the expectancy of the product of the standardized variables, i.e. $$ \newcommand{\E}{\mathbf E} \rho = \E \left [ \frac{X - \E[X]}{\sqrt{\E[(X - \E[X])^2]}} \frac{Y - \E[Y]}{\sqrt{\E[(Y - \E[Y])^2]}} \right ]. $$ The variables are uncorrelated if $\rho = 0$. It can be shown that two random variables that are independent are necessarily uncorrelated, but not vice versa. Orthogonality is a concept that originated in geometry, and was generalized in linear algebra and related fields of mathematics. In linear algebra, orthogonality of two vectors $u$ and $v$ is defined in inner product spaces , i.e. vector spaces with an inner product $\langle u, v \rangle$, as the condition that $$ \langle u, v \rangle = 0. $$ The inner product can be defined in different ways (resulting in different inner product spaces). If the vectors are given in the form of sequences of numbers, $u = (u_1, u_2, \ldots u_n)$, then a typical choice is the dot product , $\langle u, v \rangle = \sum_{i = 1}^n u_i v_i$. Orthogonality is therefore not a statistical concept per se, and the confusion you observe is likely due to different translations of the linear algebra concept to statistics: a) Formally, a space of random variables can be considered as a vector space. It is then possible to define an inner product in that space, in different ways. One common choice is to define it as the covariance: $$ \langle X, Y \rangle = \mathrm{cov} (X, Y) = \E [ (X - \E[X]) (Y - \E[Y]) ]. $$ Since the correlation of two random variables is zero exactly if the covariance is zero, according to this definition uncorrelatedness is the same as orthogonality. (Another possibility is to define the inner product of random variables simply as the expectancy of the product .) b) Not all the variables we consider in statistics are random variables. Especially in linear regression, we have independent variables which are not considered random but predefined. Independent variables are usually given as sequences of numbers, for which orthogonality is naturally defined by the dot product (see above). We can then investigate the statistical consequences of regression models where the independent variables are or are not orthogonal. In this context, orthogonality does not have a specifically statistical definition, and even more: it does not apply to random variables. Addition responding to Silverfish's comment: Orthogonality is not only relevant with respect to the original regressors but also with respect to contrasts, because (sets of) simple contrasts (specified by contrast vectors) can be seen as transformations of the design matrix, i.e. the set of independent variables, into a new set of independent variables. Orthogonality for contrasts is defined via the dot product. If the original regressors are mutually orthogonal and one applies orthogonal contrasts, the new regressors are mutually orthogonal, too. This ensures that the set of contrasts can be seen as describing a decomposition of variance, e.g. into main effects and interactions, the idea underlying ANOVA . 
Since according to variant a), uncorrelatedness and orthogonality are just different names for the same thing, in my opinion it is best to avoid using the term in that sense. If we want to talk about uncorrelatedness of random variables, let's just say so and not complicate matters by using another word with a different background and different implications. This also frees up the term orthogonality to be used according to variant b), which is highly useful especially in discussing multiple regression. And the other way around, we should avoid applying the term correlation to independent variables, since they are not random variables. Rodgers et al.'s presentation is largely in line with this view, especially as they understand orthogonality to be distinct from uncorrelatedness. However, they do apply the term correlation to non-random variables (sequences of numbers). This only makes sense statistically with respect to the sample correlation coefficient $r$. I would still recommend to avoid this use of the term, unless the number sequence is considered as a sequence of realizations of a random variable. I've scattered links to the answers to the two related questions throughout the above text, which should help you put them into the context of this answer.
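Two tiny numerical illustrations of these distinctions (the simulated variables and contrast vectors below are arbitrary examples): a pair of random variables that are uncorrelated yet clearly dependent, and a pair of orthogonal contrasts in the dot-product sense relevant to planned comparisons in ANOVA.

set.seed(1)
x <- rnorm(1e5)
y <- x^2                 # completely determined by x, hence dependent ...
cor(x, y)                # ... yet essentially uncorrelated, because E[x^3] = 0

# orthogonality of contrast vectors is a dot-product statement (variant b)
u <- c(1, -1,  0)        # compare group 1 with group 2
v <- c(1,  1, -2)        # compare the average of groups 1 and 2 with group 3
sum(u * v)               # 0: the contrasts are orthogonal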
{ "source": [ "https://stats.stackexchange.com/questions/171324", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/88821/" ] }
172,311
Why is geom_density showing me values higher than 1 in the density plot? How can I change it into a fraction? Here is the code I used to generate the plot: ggplot(data = input2, aes(x = r.close)) + geom_density(aes(y = ..density.., fill = `Próba`), alpha = 0.3, stat = "density", position = "identity") + xlab("y") + ylab("density") + theme_bw() + theme(plot.title=element_text(size = rel(1.6), face = "bold"), legend.position = "bottom", legend.background = element_rect(colour = "gray"), legend.key = element_rect(fill = "gray90"), axis.title = element_text(face = "bold", size = 13))
Or you can just use the computed ..scaled.. value stat_density provides: library(ggplot2) set.seed(1) vals1 <- rbeta(1000, 0.5, 0.1) vals2 <- rbeta(1000, 0.25, 0.3) gg <- ggplot(data.frame(x=c(vals1, vals2), grp=c(rep("a", 1000), rep("b", 1000)))) gg <- gg + geom_density(aes(x=x, y=..scaled.., fill=grp), alpha=1/2) gg <- gg + theme_bw() gg
{ "source": [ "https://stats.stackexchange.com/questions/172311", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/86596/" ] }
172,508
I am trying to get my head around the difference between sample selection and endogeneity and in turn how Heckman models (to deal with sample selection) differ from instrumental variable regressions (to deal with endogeneity). Is it correct to say that sample selection is a specific form of endogeneity, where the endogenous variable is the likelihood of being treated? Also, it seems to me that both Heckman models and IV regression are 2-stage models, where the first stage predicts the likelihood of being treated - I assume they must differ in terms of what they are empirically doing, their objectives, and assumptions, but how?
To answer your first question, you are correct that sample selection is a specific form of endogeneity (see Antonakis et al. 2010 for a good basic review of endogeneity and common remedies); however, you are not correct in saying that the likelihood of being treated is the endogenous variable, as it is the treatment variable itself ("non-random treatment assignment")--rather than the likelihood of being treated--that is endogenous in sample selection. Recall that endogeneity refers to a situation where you have incorrectly identified a causal relationship between factor X and factor Y, when the observed “relationship” is actually due to another factor Z that affects both X and Y. Put another way, given a regression model: $y_i=\beta_0+\beta_1x_i+...+\epsilon_i$ endogeneity occurs when one or more of your predictors is related to the error term in the model. That is, when $Cov(x,\epsilon)\ne0$. The common causes of endogeneity include: Omitted variables (some things we just can’t measure) Motivation/choice Ability/talent Self-selection Measurement error (we would like to include $x_j$, but we only observe $x_j*$) Simultaneity/bidirectionality (in children under 5, the relationship between the nutritional status indicator “weight for age” and whether the child had a recent illness might be simultaneous). Different types of problems require slightly different solutions, which is where the difference between IV and Heckman-type corrections lies. Of course there are differences in the underlying mechanics of these methods, but the premise is the same: to remove endogeneity, ideally via an exclusion restriction, i.e. one or more instruments in the case of IV or a variable that affects selection but not the outcome in the case of Heckman. To answer your second question, you have to think about the differences in the types of data limitations that gave rise to the development of these solutions. I like to think that the instrumental variable (IV) approach is used when one or more variables is endogenous, and there are simply no good proxies to stick in the model to remove the endogeneity, but the covariates and outcomes are observed for all observations. Heckman-type corrections, on the other hand, are used when you have truncation, i.e. the information is not observed for those in the sample where the value of the selection variable == 0. The instrumental variable (IV) approach Think of the classic econometric example for IV regression with the two-stage least squares (2SLS) estimator: the effect of education on earnings. $Earnings_i=\beta_0+ \beta_1OwnEd_i + \epsilon_i$ (1) Here level of educational achievement is endogenous because it is determined partly by the individual's motivation and ability, both of which also affect a person's earnings. Motivation and Ability are not typically measured in household or economic surveys. Equation 1 can therefore be written to explicitly include motivation and ability: $Earnings_i=\beta_0+ \{\beta_1OwnEd_i + \beta_2Motiv_i + \beta_3Abil_i\} + \epsilon_i$ (2) Since $Motiv$ and $Abil$ are not actually observed, Equation 2 can be written as: $Earnings_i=\beta_0+ \beta_1OwnEd_i + u_i$ (3), where $u_i=\beta_2Motiv_i + \beta_3Abil_i + \epsilon_i$ (4). Therefore a naïve estimation of the effect of education on earnings via OLS would be biased. This part you already know.
In the past, people have used parents' education as instruments for the subject's own level of education, as they fit the 3 requirements for a valid instrument ($z$): $z$ must be related to the endogenous predictor – $Cov(z,x)\ne0$; $z$ cannot be directly related to the outcome – $Cov(z,y)=0$; and $z$ cannot be related to the unobservable ($u$) characteristic (that is, $z$ is exogenous) – $Cov(z,u)=0$. When you estimate the subject's education ($OwnEd$) using parents' education ($MomEd$ and $DadEd$) at the first stage and use the predicted value of education ($\widehat{OwnEd}$) to estimate $Earnings$ at the second stage, you are (in very simplistic terms) estimating $Earnings$ based on the portion of $OwnEd$ that is not determined by motivation/ability. Heckman-type corrections: As we have established before, non-random sample selection is a specific type of endogeneity. In this case, the omitted variable is how people were selected into the sample. Typically, when you have a sample selection problem, your outcome is observed only for those for whom the sample selection variable == 1. This problem is also known as "incidental truncation," and the solution is commonly known as a Heckman correction. The classic example in econometrics is the wage offer of married women: $Wage_i = \beta_0 + \beta_1Educ_i + \beta_2Experience_i + \beta_3Experience^2_i+\epsilon_i$ (5) The problem here is that $Wage$ is only observed for women who worked for wages, so a naïve estimator would be biased, as we do not know what the wage offer is for those who do not participate in the labor force (the selection variable $s$). Equation 5 can be rewritten to show that it is jointly determined by two latent models: $Wage_i^* = X\beta^\prime+\epsilon_i$ (6) $LaborForce_i^* = Z\gamma^\prime+\nu_i$ (7) That is, $Wage = Wage_i^*$ IFF $LaborForce_i^*>0$ and $Wage = . $ IFF $LaborForce_i^*\leq 0$. The solution here is therefore to predict the likelihood of participation in the labor force at the first stage using a probit model and the exclusion restriction (the same criteria for valid instruments apply here), calculate the predicted inverse Mills ratio ($\hat{\lambda}$) for each observation, and at the second stage, estimate the wage offer using $\hat{\lambda}$ as a predictor in the model (Wooldridge 2009). If the coefficient on $\hat{\lambda}$ is statistically equal to zero, there is no evidence of sample selection (endogeneity), and the OLS results are consistent and can be presented. If the coefficient on $\hat{\lambda}$ is statistically significantly different from zero, you will need to report the coefficients from the corrected model. References: Antonakis, John, Samuel Bendahan, Philippe Jacquart, and Rafael Lalive. 2010. “On Making Causal Claims: A Review and Recommendations.” The Leadership Quarterly 21 (6): 1086–1120. doi:10.1016/j.leaqua.2010.10.010. Wooldridge, Jeffrey M. 2009. Introductory Econometrics: A Modern Approach. 4th ed. Mason, OH, USA: South-Western, Cengage Learning.
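To make the two-stage logic of both approaches concrete, here is a minimal R sketch on simulated data; the data-generating process, variable names and coefficient values are invented for illustration, and the manual second stages ignore the standard-error corrections that dedicated packages (e.g. AER's ivreg or sampleSelection's heckit) would apply.

```r
set.seed(1)
n <- 5000

## (a) IV / 2SLS: education and earnings with unobserved ability
ability  <- rnorm(n)                              # unobserved confounder
mom_ed   <- rnorm(n)                              # instrument: parents' education
own_ed   <- 1 + 0.5 * mom_ed + 0.8 * ability + rnorm(n)
earnings <- 2 + 1.0 * own_ed + 1.5 * ability + rnorm(n)

naive  <- lm(earnings ~ own_ed)                   # biased: ability is omitted
stage1 <- lm(own_ed ~ mom_ed)                     # first stage
tsls   <- lm(earnings ~ fitted(stage1))           # manual 2SLS (SEs need correction)
c(naive = coef(naive)[2], iv = coef(tsls)[2])     # true effect is 1

## (b) Heckman two-step: wage observed only for labour-force participants
educ <- rnorm(n)
kids <- rbinom(n, 1, 0.4)                         # exclusion restriction: affects selection only
e1 <- rnorm(n)                                    # selection-equation error
e2 <- 0.7 * e1 + sqrt(1 - 0.7^2) * rnorm(n)       # outcome error, correlated with e1
latent_lf <- 0.5 + 0.8 * educ - 1.0 * kids + e1
wage <- 1 + 0.6 * educ + e2
wage[latent_lf <= 0] <- NA                        # wage missing when not working

probit <- glm(I(latent_lf > 0) ~ educ + kids, family = binomial(link = "probit"))
xb  <- predict(probit, type = "link")
imr <- dnorm(xb) / pnorm(xb)                      # inverse Mills ratio
ols_naive <- lm(wage ~ educ)                      # ignores selection
heck2     <- lm(wage ~ educ + imr)                # two-step correction
c(ols = coef(ols_naive)["educ"], heckman = coef(heck2)["educ"])  # true effect is 0.6
```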
{ "source": [ "https://stats.stackexchange.com/questions/172508", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/30317/" ] }
172,900
I'm just learning about optimization, and having trouble understanding the difference between convex and non-convex optimization. From my understanding, a convex function is one where "the line segment between any two points on the graph of the function lies above or on the graph". In this case, a gradient descent algorithm could be used, because there is a single minimum and the gradients will always take you to that minimum. However, what about the function in this figure: Here, the blue line segment crosses below the red function. However, the function still has a single minimum, and so gradient descent would still take you to this minimum. So my questions are: 1) Is the function in this figure convex, or non-convex? 2) If it is non-convex, then can convex optimization methods (gradient descent) still be applied?
The function you have graphed is indeed not convex. However, it is quasiconvex . Gradient descent is a generic method for continuous optimization, so it can be, and is very commonly, applied to nonconvex functions. With a smooth function and a reasonably selected step size, it will generate a sequence of points $x_1, x_2, \ldots$ with strictly decreasing values $f(x_1) > f(x_2) > \ldots$. Gradient descent will eventually converge to a stationary point of the function, regardless of convexity. If the function is convex, this will be a global minimum, but if not, it could be a local minimum or even a saddle point. Quasiconvex functions are an interesting case. Any local minimum of a quasiconvex function is also a global minimum, but quasiconvex functions can also have stationary points that are not local minima (take $f(x) = x^3$ for example). So it's theoretically possible for gradient descent to get stuck on such a stationary point and not progress to a global min. In your example, if the shoulder on the left side of the graph were to perfectly level out, gradient descent could get stuck there. However, variants such as the heavy-ball method might be able to "roll through" and reach the global min.
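As a small illustration of those last two points, here is a base R sketch of gradient descent on a non-convex but quasiconvex function; the function, starting points and step size are just illustrative choices meant to mimic the "flat shoulder" situation described above.

```r
# f(x) = 1 - exp(-x^2): non-convex but quasiconvex, unique minimum at x = 0
f      <- function(x) 1 - exp(-x^2)
grad_f <- function(x) 2 * x * exp(-x^2)

gd <- function(x0, step = 0.6, iters = 200) {
  x <- x0
  for (i in seq_len(iters)) x <- x - step * grad_f(x)
  x
}

gd(x0 = 1)   # starts on the steep part: converges to ~0, the global minimum
gd(x0 = 6)   # starts on the nearly flat shoulder: gradient ~ exp(-36), barely moves
```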
{ "source": [ "https://stats.stackexchange.com/questions/172900", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/72307/" ] }
173,060
I have been asked on several occasions the question: What is Big-Data? Both by students and my relatives that are picking up the buzz around statistics and ML. I found this CV-post . And I feel that I agree with the only answer there. The Wikipedia page also has some comments on it, but I am not sure if I really agree with everything there. EDIT: (I feel that the Wikipedia page lacks in explaining the methods to tackle this and the paradigm I mention below) . I recently attended a lecture by Emmanuel Candès , where he introduced the Big-Data paradigm as Collect data first $\Rightarrow$ Ask questions later This is the main difference from hypothesis-driven research, where you first formulate a hypothesis and then collect data to say something about it. He went a lot into the issues of quantifying reliability of hypotheses generated by data snooping. The main thing I took out of his lecture was that we really need to start to control the FDR and he presented the knockoff method to do so. I think that CV should have a question on what is Big-Data and what is your definition on it. I feel that there are so many different "definitions" , that it is hard to really grasp what it is, or explain it to others, if there is not a general consensus on what it consists of. I feel that the "definition/paradigm/description" provided by Candès is the closest thing I agree on, what are your thoughts? EDIT2: I feel that the answer should provide something more than just an explanation of the data itself. It should be a combination of data/methods/paradigm. EDIT3: I feel that this interview with Michael Jordan could add something to the table as well. EDIT4: I decided to choose the highest voted answer as the correct one. Although I think that all the answers add something to the discussion and I personally feel that this is more a question of a paradigm of how we generate hypotheses and work with data. I hope this question will serve as a pool of references for those that go looking for what Big-Data is. I hope that the Wikipedia page will be changed to further emphasize the multiple comparison problem and control of FDR.
I had the pleasure of attending a lecture given by Dr. Hadley Wickham, of RStudio fame. He defined it such that: Big Data can't fit in memory on one computer (> 1 TB); Medium Data fits in memory on a server (10 GB - 1 TB); Small Data fits in memory on a laptop (< 10 GB). Hadley also believes that most data can at least be reduced to manageable problems, and that a very small amount is actually true big data. He denotes this as the "Big Data Mirage": 90% can be reduced to a small/medium data problem with subsetting/sampling/summarising; 9% can be reduced to a very large number of small data problems; 1% is irreducibly big. Slides can be found here.
{ "source": [ "https://stats.stackexchange.com/questions/173060", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/68057/" ] }
173,390
Gradient tree boosting as proposed by Friedman uses decision trees as base learners. I'm wondering if we should make the base decision trees as complex as possible (fully grown) or simpler? Is there any explanation for the choice? Random Forest is another ensemble method using decision trees as base learners. Based on my understanding, we generally use almost fully grown decision trees in each iteration. Am I right?
$\text{error = bias + variance}$ Boosting is based on weak learners (high bias, low variance). In terms of decision trees, weak learners are shallow trees, sometimes even as small as decision stumps (trees with two leaves). Boosting reduces error mainly by reducing bias (and also to some extent variance, by aggregating the output from many models). On the other hand, Random Forest uses as you said fully grown decision trees (low bias, high variance). It tackles the error reduction task in the opposite way: by reducing variance. The trees are made uncorrelated to maximize the decrease in variance, but the algorithm cannot reduce bias (which is slightly higher than the bias of an individual tree in the forest). Hence the need for large, unpruned trees, so that the bias is initially as low as possible. Please note that unlike Boosting (which is sequential), RF grows trees in parallel . The term iterative that you used is thus inappropriate.
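A hedged sketch of this contrast in R, assuming the randomForest and gbm packages are available; the simulated data and tuning values are purely illustrative: boosting many shallow stumps versus bagging fully grown trees.

```r
library(randomForest)   # bagging of deep, low-bias / high-variance trees
library(gbm)            # boosting of shallow, high-bias / low-variance trees

set.seed(42)
n <- 2000
x <- matrix(runif(n * 5), n, 5)
y <- sin(2 * pi * x[, 1]) + 2 * (x[, 2] - 0.5)^2 + x[, 3] + rnorm(n, sd = 0.3)
dat   <- data.frame(y = y, x)
train <- dat[1:1000, ]; test <- dat[1001:2000, ]

rf <- randomForest(y ~ ., data = train)                     # fully grown trees by default
gb <- gbm(y ~ ., data = train, distribution = "gaussian",
          n.trees = 2000, interaction.depth = 1,            # stumps: weak learners
          shrinkage = 0.05, bag.fraction = 1)

mse <- function(pred) mean((test$y - pred)^2)
c(rf    = mse(predict(rf, test)),
  boost = mse(predict(gb, test, n.trees = 2000)))
```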
{ "source": [ "https://stats.stackexchange.com/questions/173390", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/45339/" ] }
173,458
This question is a bit left-field, but I figured that the community here probably has strong views on the subject! I am writing up my PhD thesis. Consistently, when talking about quantities which are formally related to a Gaussian distribution, I have capitalised the "N" in "Normal" to refer to them. For example, "[...Under such circumstances] the resulting distribution is not Normal, but rather described by [...]". My supervisor has read through the relevant chapter, and replaced every single one of these with a lower-case 'n'. I can't find any definitive literature on the subject -- Springer apparently wanted names properly capitalised, and according to another random dude on the internet , capitalising distribution names is a Good Idea. Lacking a definitive style guide for my thesis, I thought I'd turn to the community of experts -- what is commonly done, and why?
For what it's worth, Wikipedia says this on the origin of the name: Since its introduction, the normal distribution has been known by many different name... Gauss himself apparently coined the term with reference to the "normal equations" involved in its applications, with normal having its technical meaning of orthogonal rather than "usual". However, by the end of the 19th century some authors had started using the name normal distribution, where the word "normal" was used as an adjective... https://en.wikipedia.org/wiki/Normal_distribution#Naming It is also not capitalized in the Wikipedia article, nor have I seen it capitalized in general as an American English speaker. For all intents and purposes normal IS an adjective, though not one that's meant to imply all other distributions are 'abnormal'.
{ "source": [ "https://stats.stackexchange.com/questions/173458", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/36171/" ] }
173,844
This is a question stemming from a real-life situation, for which I have been genuinely puzzled about its answer. My son is due to start primary school in London. As we are Italian, I was curious to know how many Italian children are already attending the school. I asked this to the Admission Officer while applying, and she told me they have on average 2 Italian children per class (of 30). I am now at the point in time where I know that my child has been accepted, but I have no other information about the other children. Admission criteria are based on distance, but for the purpose of this question, I believe we could assume it's based on random allocation from a large sample of applicants. How many Italian children are expected to be in my son's class? Will it be closer to 2 or 3?
As always you need to consider a probabilistic model that describes how the school distributes children among classes. Possibilities: The school takes care that all classes have the same number of foreign nationals. The school even tries to make certain that each nationality is represented roughly the same in every class. The school doesn't consider nationality at all and just distributes randomly or based on other criteria. All of these are reasonable. Given strategy 2 the answer to your question is no. When they use strategy 3, the expectation will be close to 3, but a bit smaller. That is because your son takes up a "slot", and you have one less chance for a random Italian. When the school uses strategy 1 the expectation also goes up; how much depends on the number of foreign nationals per class. Without knowing your school there is no way to answer this more perfectly. If you have just one class per year and the admission criteria are as described the answer would be the same as for 3 above. Calculating for 3 in detail: $$E(X) = 1 + E(B(29, 2/30)) = 1 + 1.9333 = 2.9333.$$ X is the number of Italian children in the class. The 1 comes from the known child, the 29 are the rest of the class and 2/30 is the probability for an unknown kid being Italian given what the school says. B is the binomial distribution. Note that starting with $E(X|X\geq1)$ does not give the proper answer, as knowing that a specific child is Italian violates the exchangeability assumed by the binomial distribution. Compare this with the boy or girl paradox , where it makes a difference whether you know that one child is a girl vs. knowing that the older child is a girl.
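A quick sanity check of the strategy 3 calculation by simulation (base R; one million simulated classes):

```r
set.seed(1)
# Your son is Italian for sure; each of the other 29 children is
# independently Italian with probability 2/30.
sims <- 1 + rbinom(1e6, size = 29, prob = 2/30)
mean(sims)   # ~2.933, matching 1 + 29 * 2/30
```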
{ "source": [ "https://stats.stackexchange.com/questions/173844", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/90213/" ] }
174,295
I was reading the paper ImageNet Classification with Deep Convolutional Neural Networks and in section 3, where they explain the architecture of their Convolutional Neural Network, they explain how they preferred using the non-saturating nonlinearity $f(x) = max(0, x)$ because it was faster to train. In that paper they seem to refer to saturating nonlinearities as the more traditional functions used in CNNs, the sigmoid and the hyperbolic tangent functions (i.e. $f(x) = tanh(x)$ and $f(x) = \frac{1}{1 + e^{-x}} = (1 + e^{-x})^{-1}$ as saturating). Why do they refer to these functions as "saturating" or "non-saturating"? In what sense are these functions "saturating" or "non-saturating"? What do those terms mean in the context of convolutional neural networks? Are they used in other areas of machine learning (and statistics)?
Intuition A saturating activation function squeezes the input. Definitions $f$ is non-saturating iff $ (|\lim_{z\to-\infty} f(z)| = +\infty) \vee (|\lim_{z\to+\infty} f(z)| = +\infty) $ $f$ is saturating iff $f$ is not non-saturating. These definitions are not specific to convolutional neural networks. Examples The Rectified Linear Unit (ReLU) activation function, which is defined as $f(x)=max(0,x)$ is non-saturating because $\lim_{z\to+\infty} f(z) = +\infty$ : The sigmoid activation function, which is defined as $f(x) = \frac{1}{1 + e^{-x}}$ is saturating, because it squashes real numbers to range between $[0,1]$ : The tanh (hyperbolic tangent) activation function is saturating as it squashes real numbers to range between $[-1,1]$ : (figures are from CS231n , MIT License)
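A tiny R sketch of the same point, evaluating the three functions at large inputs; the relu and sigmoid definitions are written out for illustration, while tanh() is built in.

```r
relu    <- function(x) pmax(0, x)
sigmoid <- function(x) 1 / (1 + exp(-x))

x <- c(-100, -10, 0, 10, 100)
rbind(relu = relu(x), sigmoid = sigmoid(x), tanh = tanh(x))
# sigmoid and tanh flatten out (saturate) for large |x|, so their gradients
# vanish there; relu keeps growing on the positive side (non-saturating).
```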
{ "source": [ "https://stats.stackexchange.com/questions/174295", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/28986/" ] }
174,481
In most machine learning tasks where you can formulate some probability $p$ which should be maximised, we would actually optimize the log probability $\log p$ with respect to some parameters $\theta$ instead of the probability itself. E.g. in maximum likelihood training, it's usually the log-likelihood. When doing this with some gradient method, this involves a factor: $$ \frac{\partial \log p}{\partial \theta} = \frac{1}{p} \cdot \frac{\partial p}{\partial \theta} $$ See here or here for some examples. Of course, the optimization is equivalent, but the gradient will be different, so any gradient-based method will behave differently (esp. stochastic gradient methods). Is there any justification that the $\log p$ gradient works better than the $p$ gradient?
Gradient methods generally work better optimizing $\log p(x)$ than $p(x)$ because the gradient of $\log p(x)$ is generally more well-scaled . That is, it has a size that consistently and helpfully reflects the objective function's geometry, making it easier to select an appropriate step size and get to the optimum in fewer steps. To see what I mean, compare the gradient optimization process for $p(x) = \exp(-x^2)$ and $f(x) = \log p(x) = -x^2$. At any point $x$, the gradient of $f(x)$ is $$f'(x) = -2x.$$ If we multiply that by $1/2$, we get the exact step size needed to get to the global optimum at the origin, no matter what $x$ is. This means that we don't have to work too hard to get a good step size (or "learning rate" in ML jargon). No matter where our initial point is, we just set our step to half the gradient and we'll be at the origin in one step. And if we don't know the exact factor that is needed, we can just pick a step size around 1, do a bit of line search, and we'll find a great step size very quickly, one that works well no matter where $x$ is. This property is robust to translation and scaling of $f(x)$. While scaling $f(x)$ will cause the optimal step scaling to differ from 1/2, at least the step scaling will be the same no matter what $x$ is, so we only have to find one parameter to get an efficient gradient-based optimization scheme. In contrast, the gradient of $p(x)$ has very poor global properties for optimization. We have $$p'(x) = f'(x) p(x)= -2x \exp(-x^2).$$ This multiplies the perfectly nice, well-behaved gradient $-2x$ with a factor $\exp(-x^2)$ which decays (faster than) exponentially as $x$ increases. At $x = 5$, we already have $\exp(-x^2) = 1.4 \cdot 10^{-11}$, so a step along the gradient vector is about $10^{-11}$ times too small. To get a reasonable step size toward the optimum, we'd have to scale the gradient by the reciprocal of that, an enormous constant $\sim 10^{11}$. Such a badly-scaled gradient is worse than useless for optimization purposes - we'd be better off just attempting a unit step in the uphill direction than setting our step by scaling against $p'(x)$! (In many variables $p'(x)$ becomes a bit more useful since we at least get directional information from the gradient, but the scaling issue remains.) In general there is no guarantee that $\log p(x)$ will have such great gradient scaling properties as this toy example, especially when we have more than one variable. However, for pretty much any nontrivial problem, $\log p(x)$ is going to be way, way better than $p(x)$. This is because the likelihood is a big product with a bunch of terms, and the log turns that product into a sum, as noted in several other answers. Provided the terms in the likelihood are well-behaved from an optimization standpoint, their log is generally well-behaved, and the sum of well-behaved functions is well-behaved. By well-behaved I mean $f''(x)$ doesn't change too much or too rapidly, leading to a nearly quadratic function that is easy to optimize by gradient methods. The sum of a derivative is the derivative of the sum, no matter what the derivative's order, which helps to ensure that that big pile of sum terms has a very reasonable second derivative!
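Here is a small base R sketch of the toy example above, using an arbitrary fixed step size; it is only meant to illustrate the scaling issue, not to be a serious optimizer.

```r
p         <- function(x) exp(-x^2)
grad_logp <- function(x) -2 * x                 # gradient of log p(x) = -x^2
grad_p    <- function(x) -2 * x * exp(-x^2)     # gradient of p(x)

ascend <- function(grad, x0 = 5, step = 0.4, iters = 50) {
  x <- x0
  for (i in seq_len(iters)) x <- x + step * grad(x)
  x
}

ascend(grad_logp)   # reaches ~0 (the maximiser) within a few steps
ascend(grad_p)      # stays ~5: the gradient there is of order exp(-25), about 1e-11
```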
{ "source": [ "https://stats.stackexchange.com/questions/174481", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2244/" ] }
175,246
I read that ANOVA and linear regression are the same thing. How can that be, considering that the output of ANOVA is some $F$ value and some $p$-value, based on which you conclude whether the sample means across the different samples are the same or different? But assuming the means are not equal (reject the null hypothesis), ANOVA tells you nothing about the coefficients of the linear model. So how is linear regression the same as ANOVA?
ANOVA and linear regression are equivalent when the two models test against the same hypotheses and use an identical encoding. The models differ in their basic aim: ANOVA is mostly concerned with presenting differences between categories' means in the data, while linear regression is mostly concerned with estimating a sample mean response and an associated $\sigma^2$. Somewhat aphoristically one can describe ANOVA as a regression with dummy variables. We can easily see that this is the case in the simple regression with categorical variables. A categorical variable will be encoded as an indicator matrix (a matrix of 0/1 depending on whether a subject is part of a given group or not) and then used directly for the solution of the linear system described by a linear regression. Let's see an example with 5 groups. For the sake of argument I will assume that the mean of group1 equals 1, the mean of group2 equals 2, ... and the mean of group5 equals 5. (I use MATLAB, but the exact same thing is equivalent in R.) rng(123); % Fix the seed X = randi(5,100,1); % Generate 100 random integer U[1,5] Y = X + randn(100,1); % Generate my response sample Xcat = categorical(X); % Treat the integers as categories % One-way ANOVA [anovaPval,anovatab,stats] = anova1(Y,Xcat); % Linear regression fitObj = fitlm(Xcat,Y); % Get the group means from the ANOVA ANOVAgroupMeans = stats.means % ANOVAgroupMeans = % 1.0953 1.8421 2.7350 4.2321 5.0517 % Get the beta coefficients from the linear regression LRbetas = [fitObj.Coefficients.Estimate'] % LRbetas = % 1.0953 0.7468 1.6398 3.1368 3.9565 % Rescale the betas according to the intercept scaledLRbetas = [LRbetas(1) LRbetas(1)+LRbetas(2:5)] % scaledLRbetas = % 1.0953 1.8421 2.7350 4.2321 5.0517 % Check if the two results are numerically equivalent abs(max( scaledLRbetas - ANOVAgroupMeans)) % ans = % 2.6645e-15 As can be seen, in this scenario the results were exactly the same. The minute numerical difference is due to the design not being perfectly balanced as well as the underlying estimation procedure; the ANOVA accumulates numerical errors a bit more aggressively. In that respect we fit an intercept, LRbetas(1); we could fit an intercept-free model but that would not be a "standard" linear regression. (The results would be even closer to ANOVA in that case though.) The $F$-statistic (a ratio of mean squares) in the case of the ANOVA and in the case of linear regression will also be the same for the above example: abs( fitObj.anova.F(1) - anovatab{2,5} ) % ans = % 2.9132e-13 This is because the two procedures test the same hypothesis but with different wordings: ANOVA will qualitatively check if "the ratio is high enough to suggest that no grouping is implausible" while linear regression will qualitatively check if "the ratio is high enough to suggest an intercept-only model is possibly inadequate". (This is a somewhat free interpretation of the "possibility to see a value equal or greater than the one observed under the null hypothesis" and it is not meant to be a text-book definition.) Coming back to the final part of your question about "ANOVA tell(ing) you nothing about the coefficients of the linear model (assuming the means are not equal)", I hope you can now see that the ANOVA, in the case that your design is simple/balanced enough, tells you everything that a linear model would. The confidence intervals for group means will be the same as those you have for your $\beta$, etc.
Clearly, when one starts adding multiple covariates to a regression model, a simple one-way ANOVA does not have a direct equivalence. In that case one augments the information used to calculate the linear regression's mean response with information that is not directly available for a one-way ANOVA. I believe that one can re-express things in ANOVA terms once more, but it is mostly an academic exercise. An interesting paper on the matter is Gelman's 2005 paper titled: Analysis of Variance - Why it is more important than ever. Some important points are raised; I am not fully supportive of the paper (I think I personally align much more with McCullagh's view) but it can be a constructive read. As a final note: the plot thickens when you have mixed effects models. There you have different concepts about what can be considered a nuisance or actual information regarding the grouping of your data. These issues are outside the scope of this question but I think they are worthy of a nod.
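For completeness, a sketch of the same check in R (the simulated data and seed are arbitrary): the group means fall out of the regression coefficients, and the $F$ statistic for the group effect is identical.

```r
set.seed(123)
x <- factor(sample(1:5, 100, replace = TRUE))
y <- as.numeric(as.character(x)) + rnorm(100)   # group g has mean g

fit_aov <- aov(y ~ x)        # classical one-way ANOVA
fit_lm  <- lm(y ~ x)         # same model as a regression on dummy variables

b <- coef(fit_lm)
c(b[1], b[1] + b[-1])        # group means recovered from the coefficients
tapply(y, x, mean)           # same numbers

summary(fit_aov)             # F for the group effect ...
anova(fit_lm)                # ... identical here
```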
{ "source": [ "https://stats.stackexchange.com/questions/175246", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/61158/" ] }
175,349
I recently discovered how to model exposures over time using the log of (e.g.) time as an offset in a Poisson regression. I understood that the offset corresponds to having time as a covariate with coefficient 1. I'd like to better understand the difference between using time as an offset or as a normal covariate (therefore estimating the coefficient). In which situations would I want to use one method or the other? UPDATE: I don't know if it is interesting, but I ran a validation of the two methods using randomly split data repeated 500 times and I noticed that using the offset method leads to a larger test error.
Offsets can be used in any regression model, but they are much more common when working with count data for your response variable. An offset is just a variable that is forced to have a coefficient of $1$ in the model. (See also this excellent CV thread: When to use an offset in a Poisson regression? ) When used correctly with count data, this will let you model rates instead of counts . If that is of interest, then it is something to do. Thus, this is the context in which offsets are used most frequently. Let's consider a Poisson GLiM with a log link (which is the canonical link). \begin{align} \ln(\lambda) &= \beta_0 + \beta_1X & ({\rm counts})& \\ \ln\bigg(\frac{\lambda}{{\rm time}}\bigg) &= \beta_0 + \beta_1X & ({\rm rates})& \\ &\Rightarrow \\ \ln(\lambda) - \ln({\rm time}) &= \beta_0 + \beta_1X \\ \ln(\lambda) &= \beta_0 + \beta_1X + 1\times \ln({\rm time}) & ({\rm still\ rates})& \\ &\ne \\ \ln(\lambda) &= \beta_0 + \beta_1X + \beta_2\times \ln({\rm time})\quad {\rm when}\ \beta_2 \ne 1 & ({\rm counts\ again})& \end{align} (As you can see, the key to using an offset correctly is to make $\ln({\rm time})$ the offset, not $\rm time$.) When the coefficient on $\ln({\rm time})$ isn't $1$, you are no longer modeling rates. But since $\beta_2 \in (-\infty, 1)\cup (1, \infty)$ provides much greater flexibility to fit the data, models that don't use $\ln({\rm time})$ as an offset will typically fit better (although they may also overfit). Whether you should model counts or rates really depends on what your substantive question is. You should model the one that corresponds to what you want to know. As far as what it might mean for $\beta_2$ not to be $1$, consider an example where time isn't the variable in question. Imagine studying the number of surgical complications at different hospitals. One hospital has many more reported surgical complications, but they might claim that the comparison isn't fair because they do many more surgeries. So you decide to try to control for this. You can simply use the log of the number of surgeries as an offset, which would let you study the rate of complications per surgery. You could also use the log of the number of surgeries as another covariate. Let's say that the coefficient is significantly different from $1$. If $\beta_2 > 1$, then the hospitals that do more surgeries have a higher rate of complications (perhaps because they are rushing the job to get more done). If $\beta_2 < 1$, the hospitals that do the most have fewer complications per surgery (perhaps they have the best doctors, and so do more and do them better). Seeing how this could happen if the variable in question were time is a little more complicated. The Poisson distribution arises from the Poisson process , in which the time between events is exponentially distributed, and hence there is a natural connection to survival analysis. In survival analysis, the time to events are often not distributed as an exponential, but the baseline hazard can become greater or lesser over time. Thus, consider a case where you are modeling the number of events that occur following some natural starting point. If $\beta_2 > 1$, that means the rate of events is speeding up, whereas if $\beta_2 < 1$, that means the rate of events is slowing down. For a concrete example of the former, imagine a scan that counts the number of cancer cells a period of time after the initial tumor was surgically removed. For some patients, more time has elapsed since the surgery and you wanted to take that into account. 
Since once a cancer has regained its foothold it will begin to grow exponentially, the rate will be increasing over the time since the surgery without additional treatment. For a concrete example of the latter, consider the number of people who die of a disease outbreak for which we have no treatment. At first, lots of people die because they were more susceptible to that disease, or already had a compromised immune system, etc. Over time, as the population of people remaining is less susceptible to the disease, the rate will decrease. (Sorry this example is so morbid.)
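A minimal R sketch of the two modelling choices discussed above, on simulated data with made-up coefficients; the point is only that offset(log(time)) fixes the coefficient at 1, while entering log(time) as a covariate estimates it.

```r
set.seed(1)
n    <- 1000
x    <- rnorm(n)
time <- runif(n, 0.5, 5)                  # exposure per subject
lambda <- time * exp(0.2 + 0.7 * x)       # counts proportional to exposure (a rate model)
y <- rpois(n, lambda)

# Rate model: coefficient on log(time) forced to 1
fit_offset <- glm(y ~ x + offset(log(time)), family = poisson)

# Same term as a free covariate: its coefficient is estimated (here it should be near 1)
fit_free   <- glm(y ~ x + log(time), family = poisson)

coef(fit_offset)
coef(fit_free)
```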
{ "source": [ "https://stats.stackexchange.com/questions/175349", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/6479/" ] }
175,504
I am doing image classification using machine learning. Suppose I have some training data (images) and will split the data into training and validation sets. And I also want to augment the data (produce new images from the original ones) by random rotations and noise injection. The augmentation is done offline. Which is the correct way to do data augmentation? (1) First split the data into training and validation sets, then do data augmentation on both the training and validation sets. (2) First split the data into training and validation sets, then do data augmentation only on the training set. (3) First do data augmentation on the data, then split the data into training and validation sets.
First split the data into training and validation sets, then do data augmentation on the training set. You use your validation set to try to estimate how your method works on real world data, thus it should only contain real world data. Adding augmented data will not improve the accuracy of the validation. It will at best say something about how well your method responds to the data augmentation, and at worst ruin the validation results and interpretability.
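A schematic sketch of that ordering in R, with toy "images" as rows of a matrix and noise injection standing in for whatever augmentation you actually use:

```r
set.seed(1)
images <- matrix(rnorm(200 * 16), nrow = 200)   # 200 toy "images", 16 pixels each
labels <- sample(0:1, 200, replace = TRUE)

# 1) Split first
idx_train <- sample(200, 150)
x_train <- images[idx_train, ];  y_train <- labels[idx_train]
x_valid <- images[-idx_train, ]; y_valid <- labels[-idx_train]   # left untouched

# 2) Augment only the training set (noisy copies stand in for rotations / noise injection)
augment <- function(x) x + matrix(rnorm(length(x), sd = 0.1), nrow = nrow(x))
x_train_aug <- rbind(x_train, augment(x_train))
y_train_aug <- c(y_train, y_train)
```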
{ "source": [ "https://stats.stackexchange.com/questions/175504", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/81067/" ] }
175,509
So assuming your implementation of the motion model and sensor model is at a somewhat satisfying level, the question then is how do I stabilize localization with the re-sampling step. I'm currently dealing with the robot walking down a hallway with 3 doors, and rest wall using a door/wall classifier sensor. Only when it reaches a door, do the importance weight indicate some good confidence of its location, however as soon as the robot walks past the door it's gone. I then thought it might be because my re-samples only focuses on removing low weight particles and replacing randomly, rather than replacing them randomly but closer to high weight particles. The particle filter algorithm follows this sort of approach (after randomizing particles during initialization) 1. for particle i to M 2. x of particle i = x of particle i + velocity + random noise 3. w of particle i = p_door(x)(sensed_door) + p_wall(x)(sensed_wall) 4. normalize all w The original implementation for the resampling I ended up thinking made sense but now believe is completely off(because it's too random) was 1. find max w 2. for particle i to M 3. if w of particle i < max*0.8 4. replace particle i with new particle at random position So I was wondering what is a common way, or a somewhat ideal way to tackle this problem. Current ideas that come to mind are 1. Getting the top three particles out of N, then randomly replacing new particles around each of the three particles. 2. Getting the max weight particle, and threshold at 80% of it's importance weight value, then placing new particles around those that surpass the that threshold. Is this in the right direction for tackling a 1D particle filter problem or am I off?
{ "source": [ "https://stats.stackexchange.com/questions/175509", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/21958/" ] }
175,523
I understood that Random Forest and Extremely Randomized Trees differ in the sense that the splits of the trees in a Random Forest are deterministic, whereas they are random in the case of Extremely Randomized Trees (to be more accurate, the next split is the best split among random uniform splits in the selected variables for the current tree). But I don't fully understand the impact of these different splits in various situations. How do they compare in terms of bias/variance? How do they compare in the presence of irrelevant variables? How do they compare in the presence of correlated variables?
The Extra-(Randomized)-Trees (ET) article contains a bias-variance analysis. In Fig. 6 (on page 16), you can see a comparison with multiple methods including RF on six tests (three classification and three regression). Both methods are about the same, with the ET being a bit worse when there is a high number of noisy features (in high-dimensional data sets). That said, provided the (perhaps manual) feature selection is near optimal, the performance is about the same; however, ETs can be computationally faster. From the article itself: The analysis of the algorithm and the determination of the optimal value of K on several test problem variants have shown that the value is in principle dependent on problem specifics, in particular the proportion of irrelevant attributes. [...] The bias/variance analysis has shown that Extra-Trees work by decreasing variance while at the same time increasing bias. [...] When the randomization is increased above the optimal level, variance decreases slightly while bias increases often significantly. No silver bullet as always. Pierre Geurts, Damien Ernst, Louis Wehenkel. "Extremely randomized trees"
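If you want to experiment with the difference yourself, one option is the ranger package, which (in recent versions) offers an extra-trees-style split rule via splitrule = "extratrees"; note this is a rough stand-in rather than an exact reproduction of the original ET algorithm, and the data below are invented.

```r
library(ranger)
set.seed(7)
n <- 1000
x <- matrix(rnorm(n * 20), n, 20)               # the last 17 columns are pure noise
y <- x[, 1] + 0.5 * x[, 2] - x[, 3] + rnorm(n)
dat <- data.frame(y, x)
train <- dat[1:700, ]; test <- dat[701:1000, ]

rf <- ranger(y ~ ., data = train, num.trees = 500)            # classic RF-style splits
et <- ranger(y ~ ., data = train, num.trees = 500,
             splitrule = "extratrees", num.random.splits = 1) # ET-style random cut points

mse <- function(fit) mean((test$y - predict(fit, test)$predictions)^2)
c(rf = mse(rf), extratrees = mse(et))
```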
{ "source": [ "https://stats.stackexchange.com/questions/175523", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/73794/" ] }
176,034
What is the difference between the negative binomial distribution and the binomial distribution? I tried reading online, and I found that the negative binomial distribution is used when data points are discrete, but I think even the binomial distribution can be used for discrete data points.
The difference is what we are interested in. Both distributions are built from independent Bernoulli trials with fixed probability of success, p. With the Binomial distribution, the random variable X is the number of successes observed in n trials. Because there are a fixed number of trials, the possible values of X are 0, 1, ..., n. With the Negative Binomial distribution, the random variable Y is the number of trials until the r th success is observed. In this case, we keep increasing the number of trials until we reach r successes. The possible values of Y are r, r+1, r+2, ... with no upper bound. The Negative Binomial can also be defined in terms of the number of failures until the r th success, instead of the number of trials until the r th success. Wikipedia defines the Negative Binomial distribution in this manner. So to summarize: Binomial: fixed number of trials (n); fixed probability of success (p); random variable is X = number of successes; possible values are 0 ≤ X ≤ n. Negative Binomial: fixed number of successes (r); fixed probability of success (p); random variable is Y = number of trials until the r th success; possible values are r ≤ Y. Thanks to Ben Bolker for reminding me to mention the support of the two distributions. He answered a related question here.
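A short R illustration; note that R's rnbinom() uses the number-of-failures convention mentioned above (the Wikipedia definition), so the number of trials is the number of failures plus r. The parameter values are arbitrary.

```r
set.seed(1)
p <- 0.3; n_trials <- 10; r <- 3

# Binomial: number of successes in a fixed number of trials
mean(rbinom(1e5, size = n_trials, prob = p))   # ~ n * p = 3

# Negative binomial: rnbinom() counts FAILURES before the r-th success,
# so add r to get the total number of trials.
failures <- rnbinom(1e5, size = r, prob = p)
mean(failures + r)                             # ~ r / p = 10 trials on average
```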
{ "source": [ "https://stats.stackexchange.com/questions/176034", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/90391/" ] }
176,384
This question has been asked before here and here but I don't think the answers address the question directly. Do underpowered studies have an increased likelihood of false positives? Some news articles make this assertion. For example: Low statistical power is bad news. Underpowered studies are more likely to miss genuine effects, and as a group they're more likely to include a higher proportion of false positives -- that is, effects that reach statistical significance even though they are not real. As I understand it, the power of a test can be increased by: increasing the sample size, having a larger effect size, or increasing the significance level. Assuming we don't want to change the significance level, I believe the quote above refers to changing the sample size. However, I don't see how decreasing the sample size should increase the number of false positives. To put it simply, reducing the power of a study increases the chances of false negatives, which corresponds to: $$P(\text{failure to reject }H_{0}|H_{0}\text{ is false})$$ On the contrary, false positives correspond to: $$P(\text{reject }H_{0}|H_{0}\text{ is true})$$ These are different questions because the conditionals are different. Power is (inversely) related to false negatives but not to false positives. Am I missing something?
You are correct in that sample size affects power (i.e. 1 - type II error), but not type I error. It's a common misunderstanding that a p-value as such (correctly interpreted) is less reliable or valid when the sample size is small - the very entertaining article by Friston 2012 has a funny take on that [1]. That being said, the issues with underpowered studies are real, and the quote is largely correct I would say, only a bit imprecise in its wording. The basic problem with underpowered studies in the standard NHST framework is that, although the rate of false positives (type I error) is fixed, the rate of true positives (power) goes down, and thus so does the relative proportion of true positives among all positives. Hence, if we assume that people tend to publish only positive (= significant) results, a reported positive result is less likely to be a true positive when the study is underpowered. This idea is expressed in the false discovery rate [2], see also [3]. This seems to be what the quote refers to. An additional issue often named regarding underpowered studies is that they lead to overestimated effect sizes. The reason is that a) with lower power, your estimates of the true effects will become more variable (stochastic) around their true value, and b) only the strongest of those effects will pass the significance filter when the power is low. One should add, though, that this is a reporting problem that could easily be fixed by discussing and reporting all and not only significant effects. Finally, an important practical issue with underpowered studies is that low power increases statistical issues (e.g. bias of estimators) as well as the temptation to play around with variables and similar p-hacking tactics. Using these "researcher degrees of freedom" is most effective when the power is low, and THIS can increase type I error after all; see, e.g., [4]. For all these reasons, I would therefore indeed be skeptical about an underpowered study. [1] Friston, K. (2012) Ten ironic rules for non-statistical reviewers. NeuroImage, 61, 1300-1310. [2] https://en.wikipedia.org/wiki/False_discovery_rate [3] Button, K. S.; Ioannidis, J. P. A.; Mokrysz, C.; Nosek, B. A.; Flint, J.; Robinson, E. S. J. & Munafo, M. R. (2013) Power failure: why small sample size undermines the reliability of neuroscience. Nat. Rev. Neurosci., 14, 365-376 [4] Simmons, J. P.; Nelson, L. D. & Simonsohn, U. (2011) False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant. Psychol Sci., 22, 1359-1366.
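A small simulation sketch of the false-discovery-rate argument (the sample sizes, effect size and share of true nulls are arbitrary illustrative choices): with alpha fixed, the proportion of "discoveries" that are false is much larger in the low-power setting.

```r
set.seed(1)
alpha <- 0.05; pi_null <- 0.8                 # 80% of tested effects are truly null

sim_fdr <- function(n_per_group, effect = 0.5, n_tests = 5000) {
  is_null <- runif(n_tests) < pi_null
  pvals <- sapply(is_null, function(null) {
    d <- if (null) 0 else effect
    t.test(rnorm(n_per_group, d), rnorm(n_per_group))$p.value
  })
  sig <- pvals < alpha
  mean(is_null[sig])                          # share of significant results that are false
}

c(low_power  = sim_fdr(n_per_group = 10),     # power ~0.18: roughly half are false positives
  high_power = sim_fdr(n_per_group = 100))    # power ~0.94: far fewer false positives
```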
{ "source": [ "https://stats.stackexchange.com/questions/176384", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/2676/" ] }
176,599
When explaining LASSO regression, the diagram of a diamond and circle is often used. It is said that because the shape of the constraint in LASSO is a diamond, the least squares solution obtained might touch the corner of the diamond such that it leads to a shrinkage of some variable. However, in ridge regression, because it is a circle, it will often not touch the axis. I could not understand why it cannot touch the axis or maybe have a lower probability than LASSO to shrink certain parameters. On top of that, why do LASSO and ridge have lower variance than ordinary least squares? The above is my understanding of ridge and LASSO and I might be wrong. Can someone help me understand why these two regression methods have lower variance?
This is regarding the variance OLS provides what is called the Best Linear Unbiased Estimator (BLUE). That means that if you take any other unbiased estimator, it is bound to have a higher variance than the OLS solution. So why on earth should we consider anything other than that? Now the trick with regularization, such as the lasso or ridge, is to add some bias in turn to try to reduce the variance. Because when you estimate your prediction error, it is a combination of three things: $$ \text{E}[(y-\hat{f}(x))^2]=\text{Bias}[\hat{f}(x)]^2 +\text{Var}[\hat{f}(x)]+\sigma^2 $$ The last part is the irreducible error, so we have no control over that. Using the OLS solution the bias term is zero. But it might be that the second term is large. It might be a good idea (if we want good predictions) to add in some bias and hopefully reduce the variance. So what is this $\text{Var}[\hat{f}(x)]$? It is the variance introduced in the estimates for the parameters in your model. The linear model has the form $$ \mathbf{y}=\mathbf{X}\beta + \epsilon,\qquad \epsilon\sim\mathcal{N}(0,\sigma^2I) $$ To obtain the OLS solution we solve the minimization problem $$ \arg \min_\beta ||\mathbf{y}-\mathbf{X}\beta||^2 $$ This provides the solution $$ \hat{\beta}_{\text{OLS}} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y} $$ The minimization problem for ridge regression is similar: $$ \arg \min_\beta ||\mathbf{y}-\mathbf{X}\beta||^2+\lambda||\beta||^2\qquad \lambda>0 $$ Now the solution becomes $$ \hat{\beta}_{\text{Ridge}} = (\mathbf{X}^T\mathbf{X}+\lambda I)^{-1}\mathbf{X}^T\mathbf{y} $$ So we are adding this $\lambda I$ (called the ridge) on the diagonal of the matrix that we invert. The effect this has on the matrix $\mathbf{X}^T\mathbf{X}$ is that it "pulls" the determinant of the matrix away from zero. Thus when you invert it, you do not get huge eigenvalues. But that leads to another interesting fact, namely that the variance of the parameter estimates becomes lower. I am not sure if I can provide a clearer answer than this. What this all boils down to is the covariance matrix for the parameters in the model and the magnitude of the values in that covariance matrix. I took ridge regression as an example, because that is much easier to treat. The lasso is much harder and there is still active ongoing research on that topic. These slides provide some more information and this blog also has some relevant information. EDIT: What do I mean when I say that by adding the ridge the determinant is "pulled" away from zero? Note that the matrix $\mathbf{X}^T\mathbf{X}$ is a positive definite symmetric matrix. Note that all symmetric matrices with real values have real eigenvalues. Also since it is positive definite, the eigenvalues are all greater than zero. Ok so how do we calculate the eigenvalues? We solve the characteristic equation: $$ \text{det}(\mathbf{X}^T\mathbf{X}-tI)=0 $$ This is a polynomial in $t$, and as stated above, the eigenvalues are real and positive. Now let's take a look at the equation for the ridge matrix we need to invert: $$ \text{det}(\mathbf{X}^T\mathbf{X}+\lambda I-tI)=0 $$ We can change this a little bit and see: $$ \text{det}(\mathbf{X}^T\mathbf{X}-(t-\lambda)I)=0 $$ So we can solve this for $(t-\lambda)$ and get the same eigenvalues as for the first problem. Let's assume that one eigenvalue is $t_i$. So the eigenvalue for the ridge problem becomes $t_i+\lambda$. It gets shifted by $\lambda$. This happens to all the eigenvalues, so they all move away from zero.
Here is some R code to illustrate this: # Create random matrix A <- matrix(sample(10,9,T),nrow=3,ncol=3) # Make a symmetric matrix B <- A+t(A) # Calculate eigenvalues eigen(B) # Calculate eigenvalues of B with ridge eigen(B+3*diag(3)) Which gives the results: > eigen(B) $values [1] 37.368634 6.952718 -8.321352 > eigen(B+3*diag(3)) $values [1] 40.368634 9.952718 -5.321352 So all the eigenvalues get shifted up by exactly 3. You can also prove this in general by using the Gershgorin circle theorem. There the centers of the circles containing the eigenvalues are the diagonal elements. You can always add "enough" to the diagonal elements to make all the circles lie in the positive real half-plane. That result is more general and not needed for this.
{ "source": [ "https://stats.stackexchange.com/questions/176599", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/79151/" ] }
176,671
During the first half of 2015 I did the coursera course of Machine Learning (by Andrew Ng, GREAT course). And learned the basics of machine learning (linear regression, logistic regression, SVM, Neuronal Networks...) Also I have been a developer for 10 years, so learning a new programming language would not be a problem. Lately, I have started learning R in order to implement machine learning algorithms. However I have realized that if I want to keep learning I will need a more formal knowledge of statistics, currently I have a non-formal knowledge of it, but so limited that, for example, I could not properly determine which of several linear models would be better (normally I tend to use R-square for it, but apparently that is not a very good idea). So to me it seems pretty obvious that I need to learn the basics of statistics (I studied that in uni but forgot most of it), where should I learn, please note that I don't really need a fully comprehensive course, just something that within a month allows me to know enough so I can get eager and learn more :). So far I have read about " Statistics without tears ", any other suggestion?
I would suggest you a basic road-map about how to go about it: You can brush up basic math and stats at Khan Academy, and/or take the Intro to Statistics course by Udacity . Then, you can take these two nice courses of Udacity. Descriptive Statistics and Inferential Statistics Then, you can dive into some Bayesian stats. And one of the best-related resource on the web which I have found is the Think Bayes free e-book Then, dive into the basics of Machine Learning. Coursera's Andrew Ng's course is the perfect start. And this resource: Machine Learning for developers is also very useful for skimming through the concepts quickly. Then, you are on your own. You have enough resources and blogs on the internet for building up on these concepts. Bonus: A wonderful site for such road maps is Metacademy , which I personally would vouch as one of the best Data Science resources on the web. Gitxiv is another beautiful site, which connects the Arxiv research papers on Data Science with the relevant open source implementations/libraries.
{ "source": [ "https://stats.stackexchange.com/questions/176671", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/91492/" ] }
177,102
I have read about singular value decomposition (SVD). In almost all textbooks it is mentioned that it factorizes the matrix into three matrices with a given specification. But what is the intuition behind splitting the matrix into such a form? PCA and other algorithms for dimensionality reduction are intuitive in the sense that the algorithm has a nice visualization property, but with SVD this is not the case.
Write the SVD of matrix $X$ (real, $n\times p$) as $$ X = U D V^T $$ where $U$ is $n\times p$, $D$ is diagonal $p\times p$ and $V^T$ is $p\times p$. In terms of the columns of the matrices $U$ and $V$ we can write $X=\sum_{i=1}^p d_i u_i v_i^T$. That shows $X$ written as a sum of $p$ rank-1 matrices. What does a rank-1 matrix look like? Let's see: $$ \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} \begin{pmatrix} 4 & 5 & 6 \end{pmatrix} = \begin{pmatrix} 4 & 5 & 6 \\ 8 & 10 & 12 \\ 12 & 15 & 18 \end{pmatrix} $$ The rows are proportional, and the columns are proportional. Think now about $X$ as containing the grayscale values of a black-and-white image, each entry in the matrix representing one pixel. For instance the following picture of a baboon: Then read this image into R and get the matrix part of the resulting structure, maybe using the library pixmap. If you want a step-by-step guide as to how to reproduce the results, you can find the code here. Calculate the SVD: baboon.svd <- svd(bab) # May take some time How can we think about this? We get the $512 \times 512$ baboon image represented as a sum of $512$ simple images, with each one only showing vertical and horizontal structure, i.e. it is an image of vertical and horizontal stripes! So, the SVD of the baboon represents the baboon image as a superposition of $512$ simple images, each one only showing horizontal/vertical stripes. Let us calculate a low-rank reconstruction of the image with $1$ and with $20$ components: baboon.1 <- sweep(baboon.svd$u[, 1, drop=FALSE], 2, baboon.svd$d[1], "*") %*% t(baboon.svd$v[, 1, drop=FALSE]) baboon.20 <- sweep(baboon.svd$u[, 1:20, drop=FALSE], 2, baboon.svd$d[1:20], "*") %*% t(baboon.svd$v[, 1:20, drop=FALSE]) resulting in the following two images: On the left we can easily see the vertical/horizontal stripes in the rank-1 image. Let us finally look at the "residual image", the image reconstructed (as above, code not shown) from the $20$ rank-one images with the lowest singular values. Here it is: Which is quite interesting: we see the parts of the original image that are difficult to represent as a superposition of vertical/horizontal lines, mostly diagonal nose hair and some texture, and the eyes!
{ "source": [ "https://stats.stackexchange.com/questions/177102", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/79135/" ] }
177,210
I was looking through the literature on regularization, and often see paragraphs that link L2 regularization with a Gaussian prior, and L1 with a Laplace prior centered on zero. I know what these priors look like, but I don't understand how this translates to, for example, the weights in a linear model. In L1, if I understand correctly, we expect sparse solutions, i.e. some weights will be pushed to exactly zero. And in L2 we get small weights but not zero weights. But why does this happen? Please comment if I need to provide more information or clarify my path of thinking.
The relation of the Laplace distribution prior with the median (or the L1 norm) was found by Laplace himself, who found that using such a prior you estimate the median rather than the mean as with the Normal distribution (see Stigler, 1986 or Wikipedia). This means that regression with a Laplace error distribution estimates the median (like e.g. quantile regression), while Normal errors refer to the OLS estimate. The robust priors you asked about were also described by Tibshirani (1996), who noticed that robust Lasso regression in a Bayesian setting is equivalent to using a Laplace prior. Such a prior for the coefficients is centered around zero (with centered variables) and has wide tails - so most regression coefficients estimated using it end up being exactly zero. This is clear if you look closely at the picture below: the Laplace distribution has a peak around zero (there is a greater distribution mass), while the Normal distribution is more diffuse around zero, so non-zero values have greater probability mass. Other possibilities for robust priors are the Cauchy or $t$-distributions. Using such priors you are more prone to end up with many zero-valued coefficients, some moderate-sized and some large-sized (long tail), while with a Normal prior you get more moderate-sized coefficients that tend not to be exactly zero, but also not that far from zero. (image source Tibshirani, 1996) Stigler, S.M. (1986). The History of Statistics: The Measurement of Uncertainty Before 1900. Cambridge, MA: Belknap Press of Harvard University Press. Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), 267-288. Gelman, A., Jakulin, A., Pittau, G.M., and Su, Y.-S. (2008). A weakly informative default prior distribution for logistic and other regression models. The Annals of Applied Statistics, 2(4), 1360-1383. Norton, R.M. (1984). The Double Exponential Distribution: Using Calculus to Find a Maximum Likelihood Estimator. The American Statistician, 38(2): 135-136.
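To see the practical consequence of the two priors, here is a hedged sketch assuming the glmnet package (the penalty strength is an arbitrary illustrative choice): the L1/Laplace-type penalty produces exact zeros, while the L2/Gaussian-type penalty only shrinks.

```r
library(glmnet)
set.seed(1)
n <- 200; p <- 20
x <- matrix(rnorm(n * p), n, p)
beta <- c(3, -2, 1.5, rep(0, p - 3))   # only 3 truly non-zero coefficients
y <- x %*% beta + rnorm(n)

lasso <- glmnet(x, y, alpha = 1)       # L1 penalty: MAP analogue of a Laplace prior
ridge <- glmnet(x, y, alpha = 0)       # L2 penalty: MAP analogue of a Gaussian prior

lam <- 0.5                             # illustrative penalty value
b_lasso <- as.matrix(coef(lasso, s = lam))[-1, 1]
b_ridge <- as.matrix(coef(ridge, s = lam))[-1, 1]
c(zeros_lasso = sum(b_lasso == 0),     # many coefficients exactly zero
  zeros_ridge = sum(b_ridge == 0))     # typically none exactly zero
```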
{ "source": [ "https://stats.stackexchange.com/questions/177210", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/75214/" ] }
177,219
When I run a glm with binomial-family (logistic regression), R output gives me the logit-estimates, which can be transformed into probabilities using plogis(logit) . So using something like plogis(predict(glm_fit, type = "terms")) would give me the adjusted probabilities of success for each predictor. But what would be the equivalent for Poisson regression? How can I "predict" the adjusted incidents rates for each predictor? Given this example: set.seed(123) dat <- data.frame(y = rpois(100, 1.5), x1 = round(runif(n = 100, 30, 70)), x2 = rbinom(100, size = 1, prob = .8), x3 = round(abs(rnorm(n = 100, 10, 5)))) fit <- glm(y ~ x1 + x2 + x3, family = poisson(), data = dat) and using predict.glm(fit, type = "terms") I get: x1 x2 x3 1 -0.023487964 0.04701003 0.02563723 2 0.052058119 -0.20041119 0.02563723 3 0.003983339 0.04701003 0.01255701 4 -0.119637524 0.04701003 -0.03322376 5 0.010851165 0.04701003 -0.00706332 6 -0.105901873 -0.20041119 -0.00706332 ... attr(,"constant") [1] 0.3786072 So, how many "incidents" (y-value) would I expect for each value of x1 , holding x2 and x3 constant (what predict does, afaik)? I'm not sure whether this question fits better into Stackoverflow or Cross Validated - please excuse if posting here was wrong!
{ "source": [ "https://stats.stackexchange.com/questions/177219", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/54740/" ] }
178,587
As stated in this question, the maximum rank of covariance matrix is $n-1$ where $n$ is sample size and so if the dimension of covariance matrix is equal to the sample size, it would be singular. I can't understand why we subtract $1$ from the maximum rank $n$ of covariance matrix.
The unbiased estimator of the sample covariance matrix given $n$ data points $\newcommand{\x}{\mathbf x}\x_i \in \mathbb R^d$ is $$\mathbf C = \frac{1}{n-1}\sum_{i=1}^n (\x_i - \bar \x)(\x_i - \bar \x)^\top,$$ where $\bar \x = \sum \x_i /n$ is the average over all points. Let us denote $(\x_i-\bar \x)$ as $\newcommand{\z}{\mathbf z}\z_i$. The $\frac{1}{n-1}$ factor does not change the rank, and each term in the sum has (by definition) rank $1$, so the core of the question is as follows: Why does $\sum \z_i\z_i^\top$ have rank $n-1$ and not rank $n$, as it would seem because we are summing $n$ rank-$1$ matrices? The answer is that it happens because $\z_i$ are not independent. By construction, $\sum\z_i = 0$. So if you know $n-1$ of $\z_i$, then the last remaining $\z_n$ is completely determined; we are not summing $n$ independent rank-$1$ matrices, we are summing only $n-1$ independent rank-$1$ matrices and then adding one more rank-$1$ matrix that is fully linearly determined by the rest. This last addition does not change the overall rank. We can see this directly if we rewrite $\sum\z_i = 0$ as $$\z_n = -\sum_{i=1}^{n-1}\z_i,$$ and now plug it into the above expression: $$\sum_{i=1}^n \z_i\z_i^\top = \sum_{i=1}^{n-1} \z_i\z_i^\top + \Big(-\sum_{i=1}^{n-1}\z_i\Big)\z_n^\top=\sum_{i=1}^{n-1} \z_i(\z_i-\z_n)^\top.$$ Now there is only $n-1$ terms left in the sum and it becomes clear that the whole sum can have at most rank $n-1$. This result, by the way, hints to why the factor in the unbiased estimator of covariance is $\frac{1}{n-1}$ and not $\frac{1}{n}$. The geometric intuition that I alluded to in the comments above is that one can always fit a 1D line to any two points in 2D and one can always fit a 2D plane to any three points in 3D, i.e. the dimensionality of the subspace is always $n-1$; this only works because we assume that this line (and plane) can be "moved around" in order to fit our points. "Positioning" this line (or plane) such that it passes through $\bar \x$ is equivalent of centering in the algebraic argument above.
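As a quick numerical illustration of the rank argument (an editorial sketch, not part of the original answer), one can check with numpy that the sample covariance of $n$ points in $d > n$ dimensions has rank at most $n-1$:

```python
# With n points in d > n dimensions, the centred vectors sum to zero, so the
# d x d sample covariance matrix has rank at most n - 1.
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 10
X = rng.normal(size=(n, d))          # n data points in d dimensions
Xc = X - X.mean(axis=0)              # centring ties the rows together
C = Xc.T @ Xc / (n - 1)              # d x d sample covariance matrix

print(np.linalg.matrix_rank(C))              # 4, i.e. n - 1
print(np.allclose(Xc.sum(axis=0), 0))        # True: the centred rows are linearly dependent
```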
{ "source": [ "https://stats.stackexchange.com/questions/178587", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/79040/" ] }
178,588
Suppose a vector of random variables $(X_1,...,X_n,Y_1,...,Y_m)$ is such that $X\sim F(\cdot)$ and $Y\sim G(\cdot)$ . So $X$ are distributed independently and identically as $F(\cdot)$ and $Y$ as $G(\cdot)$ . We only observe $n+m$ ordered variables $(Z_1<Z_2<.....<Z_{n+m})$ . The question is, can we recover the two CDFs $F(\cdot)$ and $G(\cdot)$ from the $Z's$ ? Edit: When I said recover I meant to identify, which is a different problem from estimate but related. the idea being that if I have "infinitely" many iid observations from $F$ , I can identify and then estimate $F$ (using ECDF and invoke some consistency). Here I only know $Z_1<\ldots <Z_{n+m}$ and I know that there are two CDFs that generate $Z$ 's. I am sorry if my statement was confusing.
{ "source": [ "https://stats.stackexchange.com/questions/178588", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/93112/" ] }
178,609
I am busy with ruin theory. $$ S(t) = \sum_{i=1}^{N(t)} X_i $$ $S(t)$ is the aggregate claim size after $t$ years, where $X_i$ is the individual claim size (with mean and variance given) and $N(t)$ is the number of claims that follow a Poisson distribution with parameter $\lambda$, and it is assumed that $\lambda$ has an exponential distribution with given mean. Now since $N(t)$ has a Poisson distribution, $S(t)$ has a Compound Poisson distribution with parameter $\lambda$, right? Then is the expected value of $S(t)$: $E[S(t)] = E[N(t)]\cdot E[X_i]$ $\,\:\quad\qquad = (E[\lambda]\cdot t)\cdot E[X_i]\,$ ? I am specifically confused about the $E[N(t)]$ part, does it include the $t$ variable even though it is only distributed Poisson($\lambda$) or not? And then do you use $E[\lambda]$ or only $\lambda$ in calculating the $E[N(t)]$? Also, the variance of $N(t)$, is it equal to the variance of $\lambda$ or the expected value of $\lambda$? And again should it be multiplied by $t$ even though $t$ is not given as part of the parameter in the question? Would be so glad if anyone can help. And please ask if there is any more information needed to answer the question.
{ "source": [ "https://stats.stackexchange.com/questions/178609", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/93129/" ] }
178,626
I have seen the min-max normalization formula but that normalizes values between 0 and 1. How would I normalize my data between -1 and 1? I have both negative and positive values in my data matrix.
With: $$ x' = \frac{x - \min{x}}{\max{x} - \min{x}} $$ you normalize your feature $x$ in $[0,1]$ . To normalize in $[-1,1]$ you can use: $$ x'' = 2\frac{x - \min{x}}{\max{x} - \min{x}} - 1 $$ In general, you can always get a new variable $x'''$ in $[a,b]$ : $$ x''' = (b-a)\frac{x - \min{x}}{\max{x} - \min{x}} + a $$ And in case you want to bring a variable back to its original value you can do it because these are linear transformations and thus invertible . For example: $$ x = (x''' - a)\frac{(\max{x} - \min{x})}{b-a} + \min{x} $$ An example in Python: import numpy as np x = np.array([1, 3, 4, 5, -1, -7]) # goal : range [0, 1] x1 = (x - min(x)) / ( max(x) - min(x) ) print(x1) >>> [0.66666667 0.83333333 0.91666667 1. 0.5 0.]
{ "source": [ "https://stats.stackexchange.com/questions/178626", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/49326/" ] }
178,637
Suppose there are 3 advertisers who want to show an ad on a site. If a user clicks on an ad and makes a purchase the host of the site makes a certain percentage of the purchase. Suppose we have historical data on all 3 ads for a set of individuals, and we have variables: conversion(0/1: the user clicked on the ad and made a purchase) gender(male/female) the amount of time since the last visit, etc. The owner of the site wants to know what ad to show to a particular person with a given set of characteristics(variables in historical data). What would be a good model for this? It seems that there is no "single" response variable. I was thinking of building a logistic regression where conversion(0/1) is dependent variable, and type of ad(3 values) is independent variable but I am not sure if this is correct. Do you have any suggestions on what a good modeling paradigm would be?
{ "source": [ "https://stats.stackexchange.com/questions/178637", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/20884/" ] }
178,675
I want to run a regression with many variables. Besides the hypothesized direct relationships between IV and DV, there might be some moderating and mediating relationships (but I don't know which) that I want to determine. I know how to test whether there is a moderating or mediating relationship between certain variables. Here is my question: However, how do I know whether there could be such relationships? I mean, under which circumstances can I assume such potential relationships? What is the indication that leads to testing such relationships? I am using SPSS. Thank you.
{ "source": [ "https://stats.stackexchange.com/questions/178675", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/93184/" ] }
178,854
I'm trying to understand the reasoning behind choosing a specific test approach when dealing with a simple A/B test (i.e. two variations/groups with a binary response: converted or not). As an example I will be using the data below: Version Visits Conversions A 2069 188 B 1826 220 The top answer here is great and talks about some of the underlying assumptions for z, t and chi-square tests. But what I find confusing is that different online resources will cite different approaches, and you would think the assumptions for a basic A/B test should be pretty much the same? For instance, this article uses the z-score. This article uses the following formula (which I'm not sure is different from the z-score calculation?). This paper references the t-test (p. 152). So what arguments can be made in favor of these different approaches? Why would one have a preference? To throw in one more candidate, the table above can be rewritten as a 2x2 contingency table, where Fisher's exact test (p. 5) can be used: Non-converters Converters Row Total Version A 1881 188 2069 Version B 1606 220 1826 Column Total 3487 408 3895 But according to this thread Fisher's exact test should only be used with smaller sample sizes (what's the cut-off?). And then there are paired t and z tests, the F-test (and logistic regression, but I want to leave that out for now)... I feel like I'm drowning in different test approaches, and I just want to be able to make some kind of argument for the different methods in this simple A/B test case. Using the example data I'm getting the following p-values: https://vwo.com/ab-split-test-significance-calculator/ gives a p-value of 0.001 (z-score); http://www.evanmiller.org/ab-testing/chi-squared.html (using the chi-square test) gives a p-value of 0.00259; and in R fisher.test(rbind(c(1881,188),c(1606,220)))$p.value gives a p-value of 0.002785305, which I guess are all pretty close... Anyway - just hoping for some healthy discussion on what approaches to use in online testing where sample sizes are usually in the thousands, and response ratios are often 10% or less. My gut is telling me to use chi-square, but I want to be able to answer exactly why I'm choosing it over the other multitude of ways to do it.
We use these tests for different reasons and under different circumstances. $z$ -test. A $z$ -test assumes that our observations are independently drawn from a Normal distribution with unknown mean and known variance. A $z$ -test is used primarily when we have quantitative data. (i.e. weights of rodents, ages of individuals, systolic blood pressure, etc.) However, $z$ -tests can also be used when interested in proportions. (i.e. the proportion of people who get at least eight hours of sleep, etc.) $t$ -test. A $t$ -test assumes that our observations are independently drawn from a Normal distribution with unknown mean and unknown variance. Note that with a $t$ -test, we do not know the population variance. This is far more common than knowing the population variance, so a $t$ -test is generally more appropriate than a $z$ -test, but practically there will be little difference between the two if sample sizes are large. With $z$ - and $t$ -tests, your alternative hypothesis will be that your population mean (or population proportion) of one group is either not equal, less than, or greater than the population mean (or proportion) of the other group. This will depend on the type of analysis you seek to do, but your null and alternative hypotheses directly compare the means/proportions of the two groups. Chi-squared test. Whereas $z$ - and $t$ -tests concern quantitative data (or proportions in the case of $z$ ), chi-squared tests are appropriate for qualitative data. Again, the assumption is that observations are independent of one another. In this case, you aren't seeking a particular relationship. Your null hypothesis is that no relationship exists between variable one and variable two. Your alternative hypothesis is that a relationship does exist. This doesn't give you specifics as to how this relationship exists (i.e. in which direction the relationship goes) but it will provide evidence that a relationship does (or does not) exist between your independent variable and your groups. Fisher's exact test. One drawback to the chi-squared test is that it is asymptotic. This means that the $p$ -value is accurate for very large sample sizes. However, if your sample sizes are small, then the $p$ -value may not be quite as accurate. As such, Fisher's exact test allows you to exactly calculate the $p$ -value of your data and not rely on approximations that will be poor if your sample sizes are small. I keep discussing sample sizes - different references will give you different metrics as to when your samples are large enough. I would just find a reputable source, look at their rule, and apply their rule to find the test you want. I would not "shop around", so to speak, until you find a rule that you "like". Ultimately, the test you choose should be based on a) your sample size and b) what form you want your hypotheses to take. If you are looking for a specific effect from your A/B test (for example, my B group has higher test scores), then I would opt for a $z$ -test or $t$ -test, pending sample size and the knowledge of the population variance. If you want to show that a relationship merely exists (for example, my A group and B group are different based on the independent variable but I don't care which group has higher scores), then the chi-squared or Fisher's exact test is appropriate, depending on sample size. Does this make sense? Hope this helps!
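As a concrete follow-up (an editorial sketch, not part of the answer above), the chi-squared test, Fisher's exact test and a two-proportion z-test can all be run on the 2x2 table from the question using scipy and statsmodels; the counts below are taken directly from the question.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact
from statsmodels.stats.proportion import proportions_ztest

table = np.array([[1881, 188],    # version A: non-converters, converters
                  [1606, 220]])   # version B: non-converters, converters

chi2, p_chi2, dof, expected = chi2_contingency(table, correction=False)
odds_ratio, p_fisher = fisher_exact(table)
z_stat, p_z = proportions_ztest(count=[188, 220], nobs=[2069, 1826])

# these should be close to the p-values quoted in the question (~0.0026, ~0.0028)
print("chi-squared p-value          :", p_chi2)
print("Fisher's exact test p-value  :", p_fisher)
print("two-proportion z-test p-value:", p_z)
```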
{ "source": [ "https://stats.stackexchange.com/questions/178854", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/60426/" ] }
179,026
In machine learning, people talk about the objective function, the cost function, and the loss function. Are they just different names for the same thing? When should each of them be used? If they do not always refer to the same thing, what are the differences?
These are not very strict terms and they are highly related. However: A loss function is usually a function defined on a data point, prediction and label, and measures the penalty. For example: square loss $l(f(x_i|\theta),y_i) = \left (f(x_i|\theta)-y_i \right )^2$ , used in linear regression; hinge loss $l(f(x_i|\theta), y_i) = \max(0, 1-f(x_i|\theta)y_i)$ , used in SVM; 0/1 loss $l(f(x_i|\theta), y_i) = 1 \iff f(x_i|\theta) \neq y_i$ , used in theoretical analysis and in the definition of accuracy. A cost function is usually more general. It might be a sum of loss functions over your training set plus some model complexity penalty (regularization). For example: Mean Squared Error $MSE(\theta) = \frac{1}{N} \sum_{i=1}^N \left (f(x_i|\theta)-y_i \right )^2$ ; the SVM cost function $SVM(\theta) = \|\theta\|^2 + C \sum_{i=1}^N \xi_i$ (there are additional constraints connecting $\xi_i$ with $C$ and with the training set). An objective function is the most general term for any function that you optimize during training. For example, the probability of generating the training set in the maximum likelihood approach is a well-defined objective function, but it is not a loss function nor a cost function (however you could define an equivalent cost function). For example: MLE is a type of objective function (which you maximize); divergence between classes can be an objective function but it is barely a cost function, unless you define something artificial, like 1-Divergence, and name it a cost. Long story short, I would say that: A loss function is a part of a cost function, which is a type of an objective function. All that being said, these terms are far from strict, and depending on context, research group, or background, they can shift and be used with a different meaning - the main (only?) common thing being that "loss" and "cost" functions are something one wants to minimise, and an objective function is something one wants to optimise (which can be either maximisation or minimisation).
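To make the three terms concrete, here is a small illustrative sketch (an editorial addition, not from the answer above) for a ridge-regularised linear model; the function names square_loss and cost are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta + rng.normal(scale=0.3, size=50)

theta = np.zeros(3)
lam = 0.1

def square_loss(x_i, y_i, theta):
    """Loss: defined on a single data point."""
    return (x_i @ theta - y_i) ** 2

def cost(X, y, theta, lam):
    """Cost: average loss over the training set plus a complexity penalty."""
    losses = (X @ theta - y) ** 2
    return losses.mean() + lam * np.sum(theta ** 2)

# Objective: here simply the cost we minimise; in other settings it could be
# a likelihood we maximise instead.
print("loss on the first point:", square_loss(X[0], y[0], theta))
print("cost over the data set :", cost(X, y, theta, lam))
```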
{ "source": [ "https://stats.stackexchange.com/questions/179026", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/53799/" ] }
179,511
If two variables have 0 correlation, why are they not necessarily independent? Are zero-correlated variables independent under special circumstances? If possible, I am looking for an intuitive explanation, not a highly technical one.
Correlation measures linear association between two given variables, and it has no obligation to detect any other form of association. So those two variables might be associated in several other, non-linear ways, and correlation could not distinguish such cases from the independent case. As a very didactic, artificial and unrealistic example, one can consider $X$ such that $P(X=x)=1/3$ for $x=-1, 0, 1$ and $Y=X^2$. Notice that they are not only associated, but one is a function of the other. Nonetheless, their correlation is 0, for their association is orthogonal to the association that correlation can detect.
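The didactic example above is easy to check by simulation; the following sketch (an editorial addition) draws $X$ uniformly from $\{-1,0,1\}$ and sets $Y=X^2$:

```python
# Zero correlation but complete dependence: Y is a deterministic function of X.
import numpy as np

rng = np.random.default_rng(42)
x = rng.choice([-1, 0, 1], size=100_000)
y = x ** 2

print("correlation  :", np.corrcoef(x, y)[0, 1])   # approximately 0
print("P(Y=1 | X=0) :", y[x == 0].mean())          # 0.0
print("P(Y=1 | X=1) :", y[x == 1].mean())          # 1.0 -> clearly dependent
```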
{ "source": [ "https://stats.stackexchange.com/questions/179511", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/61158/" ] }
179,674
I want to quantify the relationship between two variables, A and B, using mutual information. The way to compute it is by binning the observations (see example Python code below). However, what factors determine what number of bins is reasonable? I need the computation to be fast so I cannot simply use a lot of bins to be on the safe side. import numpy as np from sklearn.metrics import mutual_info_score def calc_MI(x, y, bins): c_xy = np.histogram2d(x, y, bins)[0] mi = mutual_info_score(None, None, contingency=c_xy) return mi
There is no best number of bins to estimate mutual information (MI) with histograms. The best way is to choose it via cross-validation if you can, or to rely on a rule of thumb. This is the reason why many other estimators of MI, which are not based on histograms, have been proposed. The number of bins will depend on the total number of data points $n$. You should try to avoid too many bins to avoid estimation errors for the joint distribution between the two variables. You should also avoid too few bins to be able to capture the relationship between the two variables. Given that np.histogram2d(x, y, D) generates a 2D histogram with D equal-width bins for both x and y , I would personally choose: $$ D = \lfloor \sqrt{n/5} \rfloor$$ In this case, on average for two uniformly distributed random variables you will have at least $5$ points for each cell of the histogram: $$ \frac{n}{D_X D_Y} \geq 5 \Rightarrow \frac{n}{D^2} \geq 5 \Rightarrow D^2 \leq n/5 \Rightarrow D = \lfloor \sqrt{n/5} \rfloor$$ This is one possible choice that simulates the adaptive partitioning approach proposed in (Cellucci, 2005) . The latter approach is often used to estimate MI to infer genetic networks: e.g. in MIDER . If you have lots of data points $n$ and no missing values you should not worry too much about finding the best number of bins; e.g. if $n = 100,000$. If this is not the case, you might consider correcting MI for finite samples. (Steuer et al., 2002) discusses some corrections of MI for the task of genetic network inference. Estimating the number of bins for a histogram is an old problem. You might be interested in this talk by Lauritz Dieckman about estimating the number of bins for MI. This talk is based on a chapter in Mike X Cohen's book about neural time-series. You might choose $D_X$ and $D_Y$ independently and use the rule of thumb used for estimating the number of bins in 1D histograms. Freedman-Diaconis' rule (no assumption on the distribution): $$D_X = \lceil \frac{\max{X} - \min{X}}{2 \cdot \mbox{IQR} \cdot n^{-1/3}} \rceil$$ where $\mbox{IQR}$ is the difference between the 75th and the 25th percentiles. Look at this related question in SE . Scott's rule (normality assumption): $$D_X = \lceil \frac{\max{X} - \min{X}}{3.5 \cdot s_X \cdot n^{-1/3}} \rceil$$ where $s_X$ is the standard deviation for $X$. Sturges' rule (might underestimate the number of bins but is good for large $n$): $$D_X = \lceil 1 + \log_2{n} \rceil$$ It is difficult to correctly estimate MI with histograms. You might then choose a different estimator: Kraskov's $k$NN estimator, which is a bit less sensitive to parameter choice: $k = 4$ or $k = 6$ nearest neighbours is often used as the default. Paper: (Kraskov, 2003) . Estimation of MI with kernels (Moon, 1995) . There are lots of packages for estimating MI: Non-Parametric Entropy Estimation Toolbox for Python. site . Information-dynamics toolkit in Java but available also for Python. site . ITE toolbox in Matlab. site .
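As an illustration of the $\lfloor\sqrt{n/5}\rfloor$ rule of thumb, here is an editorial sketch that reuses the calc_MI function from the question (restated so the block is self-contained):

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def calc_MI(x, y, bins):
    c_xy = np.histogram2d(x, y, bins)[0]
    return mutual_info_score(None, None, contingency=c_xy)

rng = np.random.default_rng(0)
n = 2000
a = rng.normal(size=n)
b = a + rng.normal(scale=0.5, size=n)     # B depends on A

bins = int(np.floor(np.sqrt(n / 5)))      # rule of thumb: 20 bins here
print("bins:", bins)
print("MI(A, B)          :", calc_MI(a, b, bins))
print("MI(A, shuffled B) :", calc_MI(a, rng.permutation(b), bins))  # near 0
```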
{ "source": [ "https://stats.stackexchange.com/questions/179674", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/29025/" ] }
179,733
Can anyone recommend a good exposition of the theory behind partial least squares regression (available online) for someone who understands SVD and PCA? I have looked at many sources online and have not found anything that had the right combination of rigor and accessibility. I have looked into The Elements of Statistical Learning , which was suggested in a comment on a question asked on Cross Validated , What is partial least squares (PLS) regression and how is it different from OLS? , but I don't think that this reference does the topic justice (it's too brief to do so, and doesn't provide much theory on the subject). From what I've read, PLS exploits linear combinations of the predictor variables, $z_i=X \varphi_i$ that maximize the covariance $ y^Tz_i $ subject to the constraints $\|\varphi_i\|=1$ and $z_i^Tz_j=0$ if $i \neq j$, where the $\varphi_i$ are chosen iteratively, in the order in which they maximize the covariance. But even after all I've read, I'm still uncertain whether that is true, and if so, how the method is executed.
Section 3.5.2 in The Elements of Statistical Learning is useful because it puts PLS regression in the right context (of other regularization methods), but is indeed very brief, and leaves some important statements as exercises. In addition, it only considers a case of a univariate dependent variable $\mathbf y$ . The literature on PLS is vast, but can be quite confusing because there are many different "flavours" of PLS: univariate versions with a single DV $\mathbf y$ (PLS1) and multivariate versions with several DVs $\mathbf Y$ (PLS2), symmetric versions treating $\mathbf X$ and $\mathbf Y$ equally and asymmetric versions ("PLS regression") treating $\mathbf X$ as independent and $\mathbf Y$ as dependent variables, versions that allow a global solution via SVD and versions that require iterative deflations to produce every next pair of PLS directions, etc. etc. All of this has been developed in the field of chemometrics and stays somewhat disconnected from the "mainstream" statistical or machine learning literature. The overview paper that I find most useful (and that contains many further references) is: Rosipal & Krämer, 2006, Overview and Recent Advances in Partial Least Squares For a more theoretical discussion I can further recommend: Frank & Friedman, 1993, A Statistical View of Some Chemometrics Regression Tools A short primer on PLS regression with univariate $y$ (aka PLS1, aka SIMPLS) The goal of regression is to estimate $\beta$ in a linear model $y=X\beta + \epsilon$ . The OLS solution $\beta=(\mathbf X^\top \mathbf X)^{-1}\mathbf X^\top \mathbf y$ enjoys many optimality properties but can suffer from overfitting. Indeed, OLS looks for $\beta$ that yields the highest possible correlation of $\mathbf X \beta$ with $\mathbf y$ . If there is a lot of predictors, then it is always possible to find some linear combination that happens to have a high correlation with $\mathbf y$ . This will be a spurious correlation, and such $\beta$ will usually point in a direction explaining very little variance in $\mathbf X$ . Directions explaining very little variance are often very "noisy" directions. If so, then even though on training data OLS solution performs great, on testing data it will perform much worse. In order to prevent overfitting, one uses regularization methods that essentially force $\beta$ to point into directions of high variance in $\mathbf X$ (this is also called "shrinkage" of $\beta$ ; see Why does shrinkage work? ). One such method is principal component regression (PCR) that simply discards all low-variance directions. Another (better) method is ridge regression that smoothly penalizes low-variance directions. Yet another method is PLS1. PLS1 replaces the OLS goal of finding $\beta$ that maximizes correlation $\operatorname{corr}(\mathbf X \beta, \mathbf y)$ with an alternative goal of finding $\beta$ with length $\|\beta\|=1$ maximizing covariance $$\operatorname{cov}(\mathbf X \beta, \mathbf y)\sim\operatorname{corr}(\mathbf X \beta, \mathbf y)\cdot\sqrt{\operatorname{var}(\mathbf X \beta)},$$ which again effectively penalizes directions of low variance. Finding such $\beta$ (let's call it $\beta_1$ ) yields the first PLS component $\mathbf z_1 = \mathbf X \beta_1$ . One can further look for the second (and then third, etc.) PLS component that has the highest possible covariance with $\mathbf y$ under the constraint of being uncorrelated with all the previous components. 
This has to be solved iteratively, as there is no closed-form solution for all components (the direction of the first component $\beta_1$ is simply given by $\mathbf X^\top \mathbf y$ normalized to unit length). When the desired number of components is extracted, PLS regression discards the original predictors and uses PLS components as new predictors; this yields some linear combination of them $\beta_z$ that can be combined with all $\beta_i$ to form the final $\beta_\mathrm{PLS}$ . Note that: If all PLS1 components are used, then PLS will be equivalent to OLS. So the number of components serves as a regularization parameter: the lower the number, the stronger the regularization. If the predictors $\mathbf X$ are uncorrelated and all have the same variance (i.e. $\mathbf X$ has been whitened ), then there is only one PLS1 component and it is equivalent to OLS. Weight vectors $\beta_i$ and $\beta_j$ for $i\ne j$ are not going to be orthogonal, but will yield uncorrelated components $\mathbf z_i=\mathbf X \beta_i$ and $\mathbf z_j=\mathbf X \beta_j$ . All that being said, I am not aware of any practical advantages of PLS1 regression over ridge regression (while the latter does have lots of advantages: it is continuous and not discrete, has analytical solution, is much more standard, allows kernel extensions and analytical formulas for leave-one-out cross-validation errors, etc. etc.). Quoting from Frank & Friedman: RR, PCR, and PLS are seen in Section 3 to operate in a similar fashion. Their principal goal is to shrink the solution coefficient vector away from the OLS solution toward directions in the predictor-variable space of larger sample spread. PCR and PLS are seen to shrink more heavily away from the low spread directions than RR, which provides the optimal shrinkage (among linear estimators) for an equidirection prior. Thus PCR and PLS make the assumption that the truth is likely to have particular preferential alignments with the high spread directions of the predictor-variable (sample) distribution. A somewhat surprising result is that PLS (in addition) places increased probability mass on the true coefficient vector aligning with the $K$ th principal component direction, where $K$ is the number of PLS components used, in fact expanding the OLS solution in that direction. They also conduct an extensive simulation study and conclude (emphasis mine): For the situations covered by this simulation study, one can conclude that all of the biased methods (RR, PCR, PLS, and VSS) provide substantial improvement over OLS. [...] In all situations, RR dominated all of the other methods studied. PLS usually did almost as well as RR and usually outperformed PCR, but not by very much. Update: In the comments @cbeleites (who works in chemometrics) suggests two possible advantages of PLS over RR: An analyst can have an a priori guess as to how many latent components should be present in the data; this will effectively allow to set a regularization strength without doing cross-validation (and there might not be enough data to do a reliable CV). Such an a priori choice of $\lambda$ might be more problematic in RR. RR yields one single linear combination $\beta_\mathrm{RR}$ as an optimal solution. In contrast PLS with e.g. five components yields five linear combinations $\beta_i$ that are then combined to predict $y$ . 
Original variables that are strongly inter-correlated are likely to be combined into a single PLS component (because combining them together will increase the explained variance term). So it might be possible to interpret the individual PLS components as some real latent factors driving $y$ . The claim is that it is easier to interpret $\beta_1, \beta_2,$ etc., as opposed to the joint $\beta_\mathrm{PLS}$ . Compare this with PCR where one can also see as an advantage that individual principal components can potentially be interpreted and assigned some qualitative meaning.
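A small numerical sketch (an editorial addition, not part of the answer above) of the statement in the primer that the first PLS1 direction is $\mathbf X^\top \mathbf y$ normalised to unit length, and that it maximises covariance with $y$ among unit-length directions:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 6
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, -1.0, 0.5, 0.0, 0.0])
y = X @ beta_true + rng.normal(size=n)

Xc = X - X.mean(axis=0)                    # centre the predictors
yc = y - y.mean()                          # centre the response

w1 = Xc.T @ yc
w1 = w1 / np.linalg.norm(w1)               # first PLS1 weight vector
t1 = Xc @ w1                               # first PLS1 component (score)

def cov_with_y(direction):
    return np.cov(Xc @ direction, yc)[0, 1]

print("cov(t1, y)                         :", cov_with_y(w1))
rand_covs = [abs(cov_with_y(v / np.linalg.norm(v)))
             for v in rng.normal(size=(1000, p))]
print("best of 1000 random unit directions:", max(rand_covs))   # never exceeds cov(t1, y)
```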
{ "source": [ "https://stats.stackexchange.com/questions/179733", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/91750/" ] }
179,864
In order to solve problems of model selection, a number of methods (LASSO, ridge regression, etc.) will shrink the coefficients of predictor variables towards zero. I am looking for an intuitive explanation of why this improves predictive ability. If the true effect of the variable was actually very large, why doesn't shrinking the parameter result in a worse prediction?
Roughly speaking, there are three different sources of prediction error: the bias of your model the variance of your model unexplainable variance We can't do anything about point 3 (except for attempting to estimate the unexplained variance and incorporating it in our predictive densities and prediction intervals). This leaves us with 1 and 2. If you actually have the "right" model, then, say, OLS parameter estimates will be unbiased and have minimal variance among all unbiased (linear) estimators (they are BLUE). Predictions from an OLS model will be best linear unbiased predictions (BLUPs). That sounds good. However, it turns out that although we have unbiased predictions and minimal variance among all unbiased predictions, the variance can still be pretty large. More importantly, we can sometimes introduce "a little" bias and simultaneously save "a lot" of variance - and by getting the tradeoff just right, we can get a lower prediction error with a biased (lower variance) model than with an unbiased (higher variance) one. This is called the "bias-variance tradeoff", and this question and its answers is enlightening: When is a biased estimator preferable to unbiased one? And regularization like the lasso, ridge regression, the elastic net and so forth do exactly that. They pull the model towards zero. (Bayesian approaches are similar - they pull the model towards the priors.) Thus, regularized models will be biased compared to non-regularized models, but also have lower variance. If you choose your regularization right, the result is a prediction with a lower error. If you search for "bias-variance tradeoff regularization" or similar, you get some food for thought. This presentation, for instance, is useful. EDIT: amoeba quite rightly points out that I am handwaving as to why exactly regularization yields lower variance of models and predictions. Consider a lasso model with a large regularization parameter $\lambda$. If $\lambda\to\infty$, your lasso parameter estimates will all be shrunk to zero. A fixed parameter value of zero has zero variance. (This is not entirely correct, since the threshold value of $\lambda$ beyond which your parameters will be shrunk to zero depends on your data and your model. But given the model and the data, you can find a $\lambda$ such that the model is the zero model. Always keep your quantifiers straight.) However, the zero model will of course also have a giant bias. It doesn't care about the actual observations, after all. And the same applies to not-all-that-extreme values of your regularization parameter(s): small values will yield the unregularized parameter estimates, which will be less biased (unbiased if you have the "correct" model), but have higher variance. They will "jump around", following your actual observations. Higher values of your regularization $\lambda$ will "constrain" your parameter estimates more and more. This is why the methods have names like "lasso" or "elastic net": they constrain the freedom of your parameters to float around and follow the data. (I am writing up a little paper on this, which will hopefully be rather accessible. I'll add a link once it's available.)
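A minimal sketch of the bias-variance point (an editorial addition, not part of the answer above): on a small sample with many correlated predictors, a ridge fit is biased but typically has a lower held-out error than OLS. The simulation setup below is illustrative only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(7)
n, p = 60, 30
X = rng.normal(size=(n, p)) + rng.normal(size=(n, 1))   # correlated predictors
beta = rng.normal(scale=0.5, size=p)
y = X @ beta + rng.normal(scale=3.0, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

ols = LinearRegression().fit(X_tr, y_tr)
ridge = Ridge(alpha=10.0).fit(X_tr, y_tr)

print("OLS   test MSE:", mean_squared_error(y_te, ols.predict(X_te)))
print("Ridge test MSE:", mean_squared_error(y_te, ridge.predict(X_te)))
```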
{ "source": [ "https://stats.stackexchange.com/questions/179864", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/93883/" ] }
179,915
So momentum-based gradient descent works as follows: $v=\beta m-\eta g$ where $m$ is the previous weight update, and $g$ is the current gradient with respect to the parameters $p$, $\eta$ is the learning rate, and $\beta$ is a constant. $p_{new} = p + v = p + \beta m - \eta g$ and Nesterov's accelerated gradient descent works as follows: $p_{new} = p + \beta v - \eta g$ which is equivalent to: $p_{new} = p + \beta (\beta m - \eta g ) - \eta g$ or $p_{new} = p + \beta^2 m - (1 + \beta) \eta g$ source: https://github.com/fchollet/keras/blob/master/keras/optimizers.py So to me it seems Nesterov's accelerated gradient descent just gives more weight to the $\eta g$ term over the previous weight change term $m$ (compared to plain old momentum). Is this interpretation correct?
Arech's answer about Nesterov momentum is correct, but the code essentially does the same thing. So in this regard the Nesterov method does give more weight to the $lr \cdot g$ term, and less weight to the $v$ term. To illustrate why Keras' implementation is correct, I'll borrow Geoffrey Hinton's example . Nesterov method takes the "gamble->correction" approach. $v' = m \cdot v - lr \cdot \nabla(w+m \cdot v)$ $w' = w + v'$ The brown vector is $m \cdot v$ (gamble/jump), the red vector is $-lr \cdot \nabla(w+m \cdot v)$ (correction), and the green vector is $m \cdot v-lr \cdot \nabla(w+m \cdot v)$ (where we should actually move to). $\nabla(\cdot)$ is the gradient function. The code looks different because it moves by the brown vector instead of the green vector , as the Nesterov method only requires evaluating $\nabla(w+m \cdot v) =: g$ instead of $\nabla(w)$. Therefore in each step we want to move back to where we were $(1 \rightarrow 0)$ follow the green vector to where we should be $(0 \rightarrow 2)$ make another gamble $(2 \rightarrow 3)$ Keras' code written for short is $p' = p + m \cdot (m \cdot v - lr \cdot g) - lr \cdot g$, and we do some maths $\begin{align} p' &= p - m \cdot v + m \cdot v + m \cdot (m \cdot v - lr \cdot g) - lr \cdot g\\ &= p - m \cdot v + m \cdot v - lr \cdot g + m \cdot (m \cdot v - lr \cdot g)\\ &= p - m \cdot v + (m \cdot v-lr \cdot g) + m \cdot (m \cdot v-lr \cdot g) \end{align}$ and that's exactly $1 \rightarrow 0 \rightarrow 2 \rightarrow 3$. Actually the original code takes a shorter path $1 \rightarrow 2 \rightarrow 3$. The actual estimated value (green vector) should be $p - m \cdot v$, which should be close to $p$ when learning converges.
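A small sketch (an editorial addition) of the "gamble, then correct" update next to classical momentum on a badly conditioned quadratic, following the equations above; the learning rate and momentum values are illustrative.

```python
import numpy as np

A = np.diag([1.0, 50.0])                 # badly conditioned quadratic f(w) = 0.5 * w^T A w
def grad(w):
    return A @ w

lr, m = 0.015, 0.9
w_mom = np.array([1.0, 1.0]); v_mom = np.zeros(2)
w_nes = np.array([1.0, 1.0]); v_nes = np.zeros(2)

for _ in range(200):
    # classical momentum: gradient evaluated at the current point
    v_mom = m * v_mom - lr * grad(w_mom)
    w_mom = w_mom + v_mom
    # Nesterov: jump along the momentum first, then correct with the gradient there
    v_nes = m * v_nes - lr * grad(w_nes + m * v_nes)
    w_nes = w_nes + v_nes

print("classical momentum, distance to optimum:", np.linalg.norm(w_mom))
print("Nesterov momentum,  distance to optimum:", np.linalg.norm(w_nes))  # typically closer
```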
{ "source": [ "https://stats.stackexchange.com/questions/179915", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/52391/" ] }
179,941
Let's consider for example a linear regression model. I heard that, in data mining, after performing a stepwise selection based on the AIC criterion, it is misleading to look at the p-values to test the null hypothesis that each true regression coefficient is zero. I heard that one should consider all the variables left in the model as having a true regression coefficient different from zero instead. Can anyone explain me why? Thank you.
after performing a stepwise selection based on the AIC criterion, it is misleading to look at the p-values to test the null hypothesis that each true regression coefficient is zero. Indeed, p-values represent the probability of seeing a test statistic at least as extreme as the one you have, when the null hypothesis is true. If $H_0$ is true, the p-value should have a uniform distribution. But after stepwise selection (or indeed, after a variety of other approaches to model selection), the p-values of those terms that remain in the model don't have that property, even when we know that the null hypothesis is true. This happens because we choose the variables that have or tend to have small p-values (depending on the precise criteria we used). This means that the p-values of the variables left in the model are typically much smaller than they would be if we'd fitted a single model. Note that selection will on average pick models that seem to fit even better than the true model, if the class of models includes the true model, or if the class of models is flexible enough to closely approximate the true model. [In addition and for basically the same reason, the coefficients that remain are biased away from zero and their standard errors are biased low; this in turn impacts confidence intervals and predictions as well -- our prediction intervals will be too narrow for example.] To see these effects, we can take multiple regression where some coefficients are 0 and some are not, perform a stepwise procedure and then for those models that contain variables that had zero coefficients, look at the p-values that result. (In the same simulation, you can look at the estimates and the standard deviations for the coefficients and discover the ones that correspond to non-zero coefficients are also impacted.) In short, it's not appropriate to consider the usual p-values as meaningful. > I heard that one should consider all the variables left in the model as significant instead. As to whether all the variables in the model after stepwise selection should be 'regarded as significant', I'm not sure the extent to which that's a useful way to look at it. What is "significance" intended to mean then? Here's the result of running R's `stepAIC` with default settings on 1000 simulated samples with n=100, and ten candidate variables (none of which are related to the response). In each case the number of terms left in the model was counted. (Figure: distribution of the number of terms left in the selected model over the 1000 simulations.) Only 15.5% of the time was the correct model chosen; the rest of the time the model included terms that were not different from zero. If it's actually possible that there are zero-coefficient variables in the set of candidate variables, we are likely to have several terms where the true coefficient is zero in our model. As a result, it's not clear it's a good idea to regard all of them as non-zero.
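A simplified Python analogue of that simulation (an editorial sketch; it uses a crude one-pass p-value selection rule rather than stepAIC, so it only illustrates the mechanism, not the exact R results):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, p, n_sims = 100, 10, 500
entry_threshold = 0.15                 # crude "selection" rule
n_kept_total, n_kept_sig = 0, 0

for _ in range(n_sims):
    X = rng.normal(size=(n, p))
    y = rng.normal(size=n)             # y is unrelated to X: every null is true
    full = sm.OLS(y, sm.add_constant(X)).fit()
    keep = np.where(full.pvalues[1:] < entry_threshold)[0]
    if len(keep) > 0:
        sub = sm.OLS(y, sm.add_constant(X[:, keep])).fit()
        n_kept_total += len(keep)
        n_kept_sig += np.sum(sub.pvalues[1:] < 0.05)

print("fraction of retained (truly null) terms with p < 0.05:",
      n_kept_sig / max(n_kept_total, 1))   # well above the nominal 5%
```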
{ "source": [ "https://stats.stackexchange.com/questions/179941", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/66293/" ] }
180,568
I'm using caret to run a cross validated random forest over a dataset. The Y variable is a factor. There are no NaN's, Inf's, or NA's in my dataset. However when running the random forest, I get Error in randomForest.default(m, y, ...) : NA/NaN/Inf in foreign function call (arg 1) In addition: There were 28 warnings (use warnings() to see them) Warning messages: 1: In data.matrix(x) : NAs introduced by coercion 2: In data.matrix(x) : NAs introduced by coercion 3: In data.matrix(x) : NAs introduced by coercion 4: In data.matrix(x) : NAs introduced by coercion Does anyone have ideas as to if this error is caused by the NA's introduced by coercion? If so, how can i prevent such coercion?
There must be some features in your training set with class 'char' . Please check this > a <- c("1", "2",letters[1:5], "3") > as.numeric(a) [1] 1 2 NA NA NA NA NA 3 Warning message: NAs introduced by coercion
{ "source": [ "https://stats.stackexchange.com/questions/180568", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/17046/" ] }
180,823
I believe that the derivative of a Gaussian process (GP) is another GP, and so I would like to know if there are closed-form equations for the prediction equations of the derivative of a GP. In particular, I am using the squared exponential (also called the Gaussian) covariance kernel and want to know about making predictions about the derivative of the Gaussian process.
The short answer: Yes, if your Gaussian Process (GP) is differentiable, its derivative is again a GP. It can be handled like any other GP and you can calculate predictive distributions. But since a GP $G$ and its derivative $G'$ are closely related you can infer properties of either one from the other. Existence of $G'$ A zero-mean GP with covariance function $K$ is differentiable (in mean square) if $K'(x_1, x_2)=\frac{\partial^2 K}{\partial x_1 \partial x_2}(x_1,x_2)$ exists. In that case the covariance function of $G'$ is equal to $K'$. If the process is not zero-mean, then the mean function needs to be differentiable as well. In that case the mean function of $G'$ is the derivative of the mean function of $G$. (For more details check for example Appendix 10A of A. Papoulis "Probability, random variables and stochastic processes") Since the Gaussian Exponential Kernel is differentiable of any order, this is no problem for you. Predictive distribution for $G'$ This is straightforward if you just want to condition on observations of $G'$: If you can calculate the respective derivatives you know mean and covariance function so that you can do inference with it in the same way as you would do it with any other GP. But you can also derive a predictive distributions for $G'$ based on observations of $G$. You do this by calculating the posterior of $G$ given your observations in the standard way and then applying 1. to the covariance and mean function of the posterior process. This works in the same manner the other way around, i.e. you condition on observations of $G'$ to infer a posterior of $G$. In that case the covariance function of $G$ is given by integrals of $K'$ and might be hard to calculate but the logic is really the same.
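For the squared exponential kernel mentioned in the question, the required derivatives can be written down explicitly (a worked example added here, not part of the answer above; $\sigma^2$ is the signal variance and $\ell$ the length-scale): $$K(x_1,x_2) = \sigma^2 \exp\left(-\frac{(x_1-x_2)^2}{2\ell^2}\right),$$ $$\operatorname{Cov}\left(G'(x_1), G(x_2)\right) = \frac{\partial K}{\partial x_1}(x_1,x_2) = -\sigma^2\,\frac{x_1-x_2}{\ell^2}\exp\left(-\frac{(x_1-x_2)^2}{2\ell^2}\right),$$ $$K'(x_1,x_2) = \frac{\partial^2 K}{\partial x_1 \partial x_2}(x_1,x_2) = \frac{\sigma^2}{\ell^2}\left(1-\frac{(x_1-x_2)^2}{\ell^2}\right)\exp\left(-\frac{(x_1-x_2)^2}{2\ell^2}\right).$$ Plugging these covariances (with the cross-covariance term linking observations of $G$ to predictions of $G'$) into the usual GP predictive equations gives closed-form predictive means and variances for the derivative.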
{ "source": [ "https://stats.stackexchange.com/questions/180823", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
181,035
According to Miller and Freund's Probability and Statistics for Engineers, 8ed (pp.217-218), the likelihood function to be maximised for binomial distribution (Bernoulli trials) is given as $L(p) = \prod_{i=1}^np^{x_i}(1-p)^{1-x_i}$ How to arrive at this equation? It seems pretty clear to me regarding the other distributions, Poisson and Gaussian; $L(\theta) = \prod_{i=1}^n \text{PDF or PMF of dist.}$ But the one for binomial is just a little different. To be straight forward, how did $nC_x~p^x(1-p)^{n-x}$ become $p^{x_i}(1-p)^{1-x_i}$ in the above likelihood function?
In maximum likelihood estimation, you are trying to maximize $nC_x~p^x(1-p)^{n-x}$; however, maximizing this is equivalent to maximizing $p^x(1-p)^{n-x}$ for a fixed $x$. Actually, the likelihoods for the Gaussian and Poisson also do not involve their leading constants, so this case is just like those as well. Addressing the OP's Comment Here is a bit more detail: First, $x$ is the total number of successes whereas $x_i$ is a single trial (0 or 1). Therefore: $$\prod_{i=1}^np^{x_i}(1-p)^{1-x_i} = p^{\sum_1^n x_i}(1-p)^{\sum_1^n (1-x_i)} = p^{x}(1-p)^{n-x}$$ That shows how you get the factors in the likelihood (by running the above steps backwards). Why does the constant go away? Informally, what most people do (including me) is just notice that the leading constant does not affect the value of $p$ that maximizes the likelihood, so we just ignore it (effectively set it to 1). We can derive this by taking the log of the likelihood function and finding where its derivative is zero: $$\ln\left(nC_x~p^x(1-p)^{n-x}\right) = \ln(nC_x)+x\ln(p)+(n-x)\ln(1-p)$$ Take the derivative with respect to $p$ and set it to $0$: $$\frac{d}{dp}\left[\ln(nC_x)+x\ln(p)+(n-x)\ln(1-p)\right] = \frac{x}{p}- \frac{n-x}{1-p} = 0$$ $$\implies \frac{n}{x} = \frac{1}{p} \implies p = \frac{x}{n}$$ Notice that the leading constant dropped out of the calculation of the MLE. More philosophically, a likelihood is only meaningful for inference up to a multiplying constant, such that if we have two likelihood functions $L_1,L_2$ and $L_1=kL_2$, then they are inferentially equivalent. This is called the Law of Likelihood . Therefore, if we are comparing different values of $p$ using the same likelihood function, the leading term becomes irrelevant. At a practical level, inference using the likelihood function is actually based on the likelihood ratio, not the absolute value of the likelihood. This is due to the asymptotic theory of likelihood ratios (which are asymptotically chi-square -- subject to certain regularity conditions that are often appropriate). Likelihood ratio tests are favored due to the Neyman-Pearson Lemma . Therefore, when we attempt to test two simple hypotheses, we will take the ratio and the common leading factor will cancel. NOTE: This will not happen if you were comparing two different models, say a binomial and a Poisson. In that case, the constants are important. Of the above reasons, the first (irrelevance to finding the maximizer of $L$) most directly answers your question.
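A quick numerical check (an editorial sketch) that $p = x/n$ maximises the binomial log-likelihood once the constant is dropped:

```python
# Grid search over p for n = 10 trials and x = 7 successes; the constant
# ln(nCx) is dropped because it does not affect the maximiser.
import numpy as np

n, x = 10, 7
p_grid = np.linspace(0.001, 0.999, 999)
log_lik = x * np.log(p_grid) + (n - x) * np.log(1 - p_grid)

print("argmax over the grid:", p_grid[np.argmax(log_lik)])   # approximately 0.7
print("x / n               :", x / n)
```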
{ "source": [ "https://stats.stackexchange.com/questions/181035", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/86947/" ] }
181,113
Specifically, I want to know if there is a difference between lm(y ~ x1 + x2) and glm(y ~ x1 + x2, family=gaussian) . I think that this particular case of glm is equal to lm. Am I wrong?
While for the specific form of model mentioned in the body of the question (i.e. lm(y ~ x1 + x2) vs glm(y ~ x1 + x2, family=gaussian) ), regression and GLMs are the same model, the title question asks something slightly more general: Is there any difference between lm and glm for the gaussian family of glm? To which the answer is "Yes!". The reason that they can be different is because you can also specify a link function in the GLM. This allows you to fit particular forms of nonlinear relationship between $y$ (or rather its conditional mean) and the $x$ -variables; while you can do this in nls as well, there's no need for starting values, sometimes the convergence is better (also the syntax is a bit easier). Compare, for example, these models (you have R so I assume you can run these yourself): x1=c(56.1, 26.8, 23.9, 46.8, 34.8, 42.1, 22.9, 55.5, 56.1, 46.9, 26.7, 33.9, 37.0, 57.6, 27.2, 25.7, 37.0, 44.4, 44.7, 67.2, 48.7, 20.4, 45.2, 22.4, 23.2, 39.9, 51.3, 24.1, 56.3, 58.9, 62.2, 37.7, 36.0, 63.9, 62.5, 44.1, 46.9, 45.4, 23.7, 36.5, 56.1, 69.6, 40.3, 26.2, 67.1, 33.8, 29.9, 25.7, 40.0, 27.5) x2=c(12.29, 11.42, 13.59, 8.64, 12.77, 9.9, 13.2, 7.34, 10.67, 18.8, 9.84, 16.72, 10.32, 13.67, 7.65, 9.44, 14.52, 8.24, 14.14, 17.2, 16.21, 6.01, 14.23, 15.63, 10.83, 13.39, 10.5, 10.01, 13.56, 11.26, 4.8, 9.59, 11.87, 11, 12.02, 10.9, 9.5, 10.63, 19.03, 16.71, 15.11, 7.22, 12.6, 15.35, 8.77, 9.81, 9.49, 15.82, 10.94, 6.53) y = c(1.54, 0.81, 1.39, 1.09, 1.3, 1.16, 0.95, 1.29, 1.35, 1.86, 1.1, 0.96, 1.03, 1.8, 0.7, 0.88, 1.24, 0.94, 1.41, 2.13, 1.63, 0.78, 1.55, 1.5, 0.96, 1.21, 1.4, 0.66, 1.55, 1.37, 1.19, 0.88, 0.97, 1.56, 1.51, 1.09, 1.23, 1.2, 1.62, 1.52, 1.64, 1.77, 0.97, 1.12, 1.48, 0.83, 1.06, 1.1, 1.21, 0.75) lm(y ~ x1 + x2) glm(y ~ x1 + x2, family=gaussian) glm(y ~ x1 + x2, family=gaussian(link="log")) nls(y ~ exp(b0+b1*x1+b2*x2), start=list(b0=-1,b1=0.01,b2=0.1)) Note that the first pair are the same model ( $y_i \sim N(\beta_0+\beta_1 x_{1i}+\beta_2 x_{2i},\sigma^2)\,$ ), and the second pair are the same model ( $y_i \sim N(\exp(\beta_0+\beta_1 x_{1i}+\beta_2 x_{2i}),\sigma^2)\,$ and the fits are essentially the same within each pair. So - in relation to the title question - you can fit a substantially wider variety of Gaussian models with a GLM than with regression.
{ "source": [ "https://stats.stackexchange.com/questions/181113", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/94684/" ] }
181,121
Let's say I have some PDF $D$ that I can sample from. I would instead like to sample values from $Inv(D)$. I have that the $D$ is only positive within the bounds $[a,b]$ and is bounded above and below by $[c,d]$. That is, the domain of $D$ is a subset of $[a,b]$ and the range of $D$ is a subset of $[c,d]$. How can I go about sampling from $Inv(D)$? I'm not sure that this is the standard use of "inverse" with respect to a distribution, so the image below should give an idea of what I mean. I can readily sample from the shaded region in the top distribution, but what I want is to sample from the shaded region in the bottom distribution. Update: I am looking to sample a single value from this inverse distribution at random intervals. My application is a setting wherein I have an agent exploring a domain. The agent models the areas of the value space that it has already explored, currently with a maximum-likelihood Gaussian of the values the agent has observed. Thus to pick a value of an unexplored area, the agent needs to periodically sample from the "inverse" of this Gaussian. Right now I'm just sampling from a uniform distribution and seeing decent results, so speed is more important to me than accuracy.
{ "source": [ "https://stats.stackexchange.com/questions/181121", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/94694/" ] }
181,138
I have the following Python code (step_size is a constant defined earlier): from random import randint x = [100.0 for _ in range(1000)] for i in range(1, len(x)): x[i] = x[i-1] + (2*randint(0, 1)-1)*step_size import numpy as np print np.corrcoef(x[:-1], x[1:]) I am getting extremely high correlation values, over 99.9%, more precisely [[ 1. 0.99785636] [ 0.99785636 1. ]] Does anyone know what's going on? How can I get such large autocorrelation?
{ "source": [ "https://stats.stackexchange.com/questions/181138", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/55799/" ] }
181,611
Since one can calculate confidence intervals for p-values, and since the opposite of interval estimation is point estimation: is the p-value a point estimate?
Point estimates and confidence intervals are for parameters that describe the distribution, e.g. the mean or the standard deviation. But unlike other sample statistics, such as the sample mean and the sample standard deviation, the p-value is not a useful estimator of an interesting distribution parameter. Look at the answer by @whuber for technical details. The p-value for a test statistic gives the probability of observing a deviation from the expected value of the test statistic at least as large as observed in the sample, calculated under the assumption that the null hypothesis is true. If you have the entire distribution, it is either consistent with the null hypothesis or it is not. This can be described by an indicator variable (again, see the answer by @whuber). But the p-value cannot be used as a useful estimator of the indicator variable because it is not consistent: the p-value does not converge as the sample size increases if the null hypothesis is true. This is a pretty complicated alternate way of stating that a statistical test can either reject or fail to reject the null, but never confirm it.
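A quick simulation sketch (using a one-sample t-test on data generated under the null) illustrates the non-convergence: however large n gets, the p-values stay scattered roughly uniformly on (0, 1) instead of settling down to anything:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
for n in [10, 100, 1000, 10000]:
    # twenty replicate datasets, each generated with the null hypothesis true (mean = 0)
    pvals = []
    for _ in range(20):
        t, p = stats.ttest_1samp(rng.normal(0.0, 1.0, n), 0.0)
        pvals.append(p)
    print(n, np.round(pvals[:5], 3))
# the p-values do not converge as n grows; under the null they remain ~ Uniform(0, 1)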
{ "source": [ "https://stats.stackexchange.com/questions/181611", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/25626/" ] }
181,629
When training a neural network using the back-propagation algorithm, the gradient descent method is used to determine the weight updates. My question is: rather than using the gradient descent method to slowly locate the minimum point with respect to a certain weight, why don't we just set the derivative $\frac{d(\text{Error})}{dw}=0$ and find the value of the weight $w$ which minimizes the error? Also, why are we sure that the error function in back-propagation will end up at a minimum? Can't it turn out that the error function is at a maximum instead? Is there a specific property of the squashing functions that guarantees that a network with any number of hidden nodes, arbitrary weights, and input vectors will always give an error function that has some minima?
Because we can't. The optimization surface $S(\mathbf{w})$ as a function of the weights $\mathbf{w}$ is nonlinear and no closed form solution exists for $\frac{d S(\mathbf{w})}{d\mathbf{w}}=0$. Gradient descent, by definition, descends. If you reach a stationary point after descending, it has to be a (local) minimum or a saddle point, but never a local maximum.
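To see the "descending" point concretely, here is a small sketch (not a neural network, just a 1-D non-convex function chosen for illustration): gradient descent started anywhere moves downhill and ends up near a stationary point that is a local minimum, never a local maximum.

import numpy as np

def f(w):
    return np.sin(3 * w) + 0.1 * w ** 2        # a non-convex "loss surface"

def grad(w):
    return 3 * np.cos(3 * w) + 0.2 * w          # its derivative

w, lr = 2.0, 0.01
for _ in range(2000):
    w -= lr * grad(w)                           # step downhill

print(w, f(w), grad(w))  # gradient is ~0 here, and f has a local minimum (not a maximum) at this w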
{ "source": [ "https://stats.stackexchange.com/questions/181629", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/86275/" ] }
182,098
I have read in a number of references that the Lasso estimate for the regression parameter vector $B$ is equivalent to the posterior mode of $B$ when the prior distribution for each $B_i$ is a double exponential distribution (also known as the Laplace distribution). I have been trying to prove this; can someone flesh out the details?
For simplicity let's just consider a single observation of a variable $Y$ such that $$Y|\mu, \sigma^2 \sim N(\mu, \sigma^2),$$ $\mu \sim \mbox{Laplace}(\lambda)$ and the improper prior $f(\sigma) \propto \mathbb{1}_{\sigma>0}$. Then the joint density of $Y, \mu, \sigma^2$ is proportional to $$ f(Y, \mu, \sigma^2 | \lambda) \propto \frac{1}{\sigma}\exp \left(-\frac{(y-\mu)^2}{2\sigma^2} \right) \times \frac{\lambda}{2} e^{-\lambda \vert \mu \vert}. $$ Taking the log and discarding terms that do not involve $\mu$, $$ \log f(Y, \mu, \sigma^2) = -\frac{1}{2\sigma^2} \Vert y-\mu\Vert_2^2 -\lambda \vert \mu \vert. \quad (1)$$ Thus the maximizer of (1) is the MAP estimate, and maximizing (1) is indeed the Lasso problem (written in the form $\frac{1}{2}\Vert y-\mu\Vert_2^2 + \tilde\lambda \vert \mu \vert$) after we reparametrize $\tilde \lambda = \lambda \sigma^2$. The extension to regression is clear: replace $\mu$ with $X\beta$ in the Normal likelihood, and set the prior on $\beta$ to be a sequence of independent Laplace$(\lambda)$ distributions.
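As a quick sanity check of the one-observation case (a sketch using a brute-force grid rather than a lasso solver, with made-up values for y, sigma and lambda): the penalized objective above is minimized at the soft-thresholded value sign(y) * max(|y| - lambda * sigma^2, 0), which is exactly the one-dimensional lasso/MAP solution.

import numpy as np

y, sigma, lam = 1.3, 0.8, 1.0                                     # illustrative values
mu = np.linspace(-3, 3, 200001)
objective = (y - mu) ** 2 / (2 * sigma ** 2) + lam * np.abs(mu)   # negative log-posterior, up to constants
grid_map = mu[np.argmin(objective)]
soft_threshold = np.sign(y) * max(abs(y) - lam * sigma ** 2, 0)   # closed-form lasso/MAP solution
print(grid_map, soft_threshold)                                   # both ~ 0.66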
{ "source": [ "https://stats.stackexchange.com/questions/182098", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/36049/" ] }
182,099
I have an introductory-graduate-level statistics background (assume I know mathematical statistics and probability at an undergraduate level (e.g., Wackerly et al., Ross' Probability), and have some knowledge of measure theory). I have recently started a job doing experimental design and statistical reporting in education statistics, and have been placed on a project where I am basically assessing accountability metrics for schools and have to analyze the data, propose changes, etc. Note that I am the only one in my department with a mathematical statistics background. In my position, people have strongly suggested using effect size to measure the effectiveness of programs. The only time I've ever heard of effect size is from my friend, who studied psychology. My impression is that $$\text{Effect Size} = \dfrac{\text{Difference of Means}}{\text{Standard Deviation}}\text{.}$$ What is so useful about this metric over traditional hypothesis testing, and why should I care about it? To me, it looks like nothing more than a test statistic for a two-sample $t$-test. I don't see how this is useful at all, apart from maybe putting everything on the same scale (which is why anyone really "normalizes" anything), but I thought test statistics (which is what effect size seems like to me) were out of fashion, and that $p$-values were preferred.
That is one measure of effect size, but there are many others. It is certainly not the $t$ test statistic. Your measure of effect size is often called Cohen's $d$ (strictly speaking that is correct only if the SD is estimated via MLE—i.e., without Bessel's correction ); more generically, it is called the 'standardized mean difference'. Perhaps this will make it clearer that $t\ne d$: \begin{align} d &= \frac{\bar x_2 - \bar x_1}{SD} \\[10pt] &\ne \\[10pt] t &= \frac{\bar x_2 - \bar x_1}{SE} \\[10pt] t &= \frac{\bar x_2 - \bar x_1}{\frac{SD}{\sqrt N}} \\ \end{align} That is, the "$/\sqrt N$" is missing from the formula for the standardized mean difference. More generally, taking the sample size out of the value provides real information. Assuming the true effect is not exactly $0$ to infinite decimal places, you can achieve any level of significance you might like with sufficient $N$. The $p$-value provides information about how confident we can be in rejecting the null hypothesis, but does so by conflating how big the effect is with how much data you have. It is certainly nice to know if we should reject the null hypothesis, but it would also be nice to know if the effect of your educational intervention produces large gains for schoolchildren or is trivial and was only significant due to large $N$.
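A small simulation may make the $d$-versus-$t$ distinction concrete (a sketch with made-up numbers; the two groups differ by 0.2 SD in truth). The standardized mean difference hovers around 0.2 no matter how much data you collect, while the $t$ statistic — and hence the significance level you can reach — keeps growing roughly like $\sqrt{N}$:

import numpy as np

rng = np.random.default_rng(0)
for n in [50, 500, 5000, 50000]:                                  # per-group sample sizes
    x1 = rng.normal(0.0, 1.0, n)
    x2 = rng.normal(0.2, 1.0, n)                                  # true effect of 0.2 SD
    sd_pooled = np.sqrt((x1.var(ddof=1) + x2.var(ddof=1)) / 2)
    d = (x2.mean() - x1.mean()) / sd_pooled                       # effect size: stable in n
    t = (x2.mean() - x1.mean()) / (sd_pooled * np.sqrt(2 / n))    # test statistic: grows with sqrt(n)
    print(n, round(d, 3), round(t, 1))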
{ "source": [ "https://stats.stackexchange.com/questions/182099", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/46427/" ] }
182,102
I understand the convolutional and pooling layers, but I cannot see the reason for a fully connected layer in CNNs. Why isn't the previous layer directly connected to the output layer?
The output from the convolutional layers represents high-level features in the data. While that output could be flattened and connected to the output layer, adding a fully-connected layer is a (usually) cheap way of learning non-linear combinations of these features. Essentially the convolutional layers are providing a meaningful, low-dimensional, and somewhat invariant feature space, and the fully-connected layer is learning a (possibly non-linear) function in that space. NOTE: It is trivial to convert from FC layers to Conv layers. Converting these top FC layers to Conv layers can be helpful as this page describes.
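As a concrete (hypothetical) illustration, assuming a Keras-style API — the layer sizes here are arbitrary, not taken from the question — a typical image classifier stacks convolution/pooling blocks that produce the feature space, then flattens it and adds a fully-connected layer to combine those features before the output layer:

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                       # high-level features from the conv stack
    layers.Dense(64, activation="relu"),    # learns non-linear combinations of those features
    layers.Dense(10, activation="softmax"), # output layer
])
model.summary()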
{ "source": [ "https://stats.stackexchange.com/questions/182102", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/44001/" ] }
182,329
The data has many features (e.g. 100) and the number of instances is around 100,000. The data is sparse. I want to fit the data using logistic regression or an SVM. How do I know whether the features are linear or non-linear, so that I can use the kernel trick if they are non-linear?
There are several methods to find out whether the data is linearly separable; some of them are highlighted in this paper (1). Assuming two classes in the dataset, the following are a few methods to find out whether they are linearly separable: Linear programming: Define an objective function subject to constraints that encode linear separability; the data is separable if and only if the program is feasible. You can find details about the implementation here. Perceptron method: A perceptron is guaranteed to converge if the data is linearly separable. Quadratic programming: A quadratic programming objective function can be defined with constraints, as in the SVM. Computational geometry: If one can find two disjoint convex hulls, then the data is linearly separable. Clustering method: If one can find two clusters with a cluster purity of 100% using a clustering method such as k-means, then the data is linearly separable. (1): Elizondo, D., "The linear separability problem: some testing methods," IEEE Transactions on Neural Networks, vol. 17, no. 2, pp. 330-344, March 2006, doi: 10.1109/TNN.2005.860871
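As a rough sketch of the perceptron method in plain numpy (with an important caveat: convergence proves separability, but failing to converge within the iteration cap is inconclusive rather than a proof of non-separability):

import numpy as np

def looks_linearly_separable(X, y, max_epochs=1000):
    # X: (n, d) feature matrix; y: labels in {-1, +1}
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # absorb the bias term
    w = np.zeros(Xb.shape[1])
    for _ in range(max_epochs):
        updates = 0
        for xi, yi in zip(Xb, y):
            if yi * xi.dot(w) <= 0:                 # misclassified (or on the boundary)
                w += yi * xi
                updates += 1
        if updates == 0:                            # a full pass with no mistakes: separating hyperplane found
            return True
    return False                                    # inconclusive within the cap

# Tiny usage example with two well-separated blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
print(looks_linearly_separable(X, y))               # expected: True for these well-separated groups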
{ "source": [ "https://stats.stackexchange.com/questions/182329", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/95466/" ] }
182,488
I'm working on learning probability and statistics by reading a few books and writing some code, and while simulating coin flips I noticed something that struck me as slightly counter to one's naive intuition. If you flip a fair coin $n$ times, the ratio of heads to tails converges towards 1 as $n$ increases, exactly as you would expect. But on the other hand, as $n$ increases, it appears that you become less likely to flip the exact same number of heads as tails, thereby getting a ratio of exactly 1. For example (some output from my program) For 100 flips, it took 27 experiments until we got an exact match (50 HEADS, 50 TAILS) For 500 flips, it took 27 experiments until we got an exact match (250 HEADS, 250 TAILS) For 1000 flips, it took 11 experiments until we got an exact match (500 HEADS, 500 TAILS) For 5000 flips, it took 31 experiments until we got an exact match (2500 HEADS, 2500 TAILS) For 10000 flips, it took 38 experiments until we got an exact match (5000 HEADS, 5000 TAILS) For 20000 flips, it took 69 experiments until we got an exact match (10000 HEADS, 10000 TAILS) For 80000 flips, it took 5 experiments until we got an exact match (40000 HEADS, 40000 TAILS) For 100000 flips, it took 86 experiments until we got an exact match (50000 HEADS, 50000 TAILS) For 200000 flips, it took 96 experiments until we got an exact match (100000 HEADS, 100000 TAILS) For 500000 flips, it took 637 experiments until we got an exact match (250000 HEADS, 250000 TAILS) For 1000000 flips, it took 3009 experiments until we got an exact match (500000 HEADS, 500000 TAILS) My question is this: is there a concept / principle in statistics / probability theory that explains this? If so, what principle / concept is it? Link to code if anyone is interested in seeing how I generated this. -- edit -- For what it's worth, here's how I was explaining this to myself earlier. If you flip a fair coin $\mathtt n$ times and count the number of heads, you're basically generating a random number. Likewise if you do the same thing and count the tails, you're also generating a random number. So if you count both, you're really generating two random numbers, and as $\mathtt n$ gets larger, the random numbers are getting larger. And the larger the random numbers you generate, the more chances there are for them to "miss" each other. What makes this interesting is that the two numbers are actually linked in a sense, with their ratio converging towards one as they get bigger, even though each number is random in isolation. Maybe it's just me, but I find that sort of neat.
Note that the case where the number of heads and the number of tails are equal is the same as "exactly half the time you get heads". So let's stick to counting the number of heads to see if it's half the number of tosses, or equivalently comparing the proportion of heads with 0.5. The more you flip, the larger the number of possible counts of heads you can have -- the distribution becomes more spread out (e.g. an interval for the number of heads containing 95% of the probability will grow wider as the number of tosses increases), so the probability of exactly half heads will tend to go down as we toss more. Correspondingly, the proportion of heads will take more possible values; see here, where we move from 100 tosses to 200 tosses: With 100 tosses we can observe a proportion of 0.49 heads or 0.50 heads or 0.51 heads (and so on -- but nothing in between those values), but with 200 tosses, we can observe 0.49 or 0.495 or 0.50 or 0.505 or 0.510 - the probability has more values to "cover" and so each will tend to get a smaller share. Consider that you have $2n$ tosses with some probability $p_i$ of getting $i$ heads (we know these probabilities but it's not critical for this part), and you add two more tosses. In $2n$ tosses, $n$ heads is the most likely outcome ($p_n>p_{n\pm 1}$ and it goes down from there). What's the chance of having $n+1$ heads in $2n+2$ tosses? (Label these probabilities with $q$ so we don't confuse them with the previous ones; also let P(HH) be the probability of "Head,Head" in the next two tosses, and so on.) $q_{n+1} = p_{n-1} P(HH) + p_n (P(HT)+P(TH)) + p_{n+1} P(TT)$ $\qquad < p_{n} P(HH) + p_n (P(HT)+P(TH)) + p_{n} P(TT) = p_n$ i.e. if you add two more coin-tosses, the probability of the middle value naturally goes down, because it averages the most likely (middle) value with the average of the smaller values on either side. So as long as you're comfortable that the peak will be in the middle (for $2n= 2,4,6,...$), the probability of exactly half heads must decrease as $n$ goes up. In fact we can show that for large $n$, $p_n$ decreases proportionally with $\frac{1}{\sqrt{n}}$ (unsurprisingly, since the distribution of the standardized number of heads approaches normality and the variance of the proportion of heads decreases with $n$). As requested, here's R code that produces something close to the above plot: x1 = 25:75 x2 = 50:150 plot(x1 / 100, dbinom(x1, 100, 0.5), type = "h", main = "Proportion of heads in 100 and 200 tosses", xlab = "Proportion of heads", ylab = "probability")
{ "source": [ "https://stats.stackexchange.com/questions/182488", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/115/" ] }
182,734
I haven't seen the question stated precisely in these terms, and this is why I make a new question. What I am interested in knowing is not the definition of a neural network, but understanding the actual difference with a deep neural network. For more context: I know what a neural network is and how backpropagation works. I know that a DNN must have multiple hidden layers. However, 10 years ago in class I learned that having several layers or one layer (not counting the input and output layers) was equivalent in terms of the functions a neural network is able to represent (see Cybenko's Universal approximation theorem ), and that having more layers made it more complex to analyse without gain in performance. Obviously, that is not the case anymore. I suppose, maybe wrongly, that the differences are in terms of training algorithm and properties rather than structure, and therefore I would really appreciate if the answer could underline the reasons that made the move to DNN possible (e.g. mathematical proof or randomly playing with networks?) and desirable (e.g. speed of convergence?)
Let's start with a triviality: a deep neural network is simply a feedforward network with many hidden layers. This is more or less all there is to say about the definition. Neural networks can be recurrent or feedforward; feedforward ones do not have any loops in their graph and can be organized in layers. If there are "many" layers, then we say that the network is deep. How many layers does a network have to have in order to qualify as deep? There is no definite answer to this (it's a bit like asking how many grains make a heap), but usually having two or more hidden layers counts as deep. In contrast, a network with only a single hidden layer is conventionally called "shallow". I suspect that there will be some inflation going on here, and in ten years people might think that anything with less than, say, ten layers is shallow and suitable only for kindergarten exercises. Informally, "deep" suggests that the network is tough to handle. Here is an illustration, adapted from here: But the real question you are asking is, of course, Why would having many layers be beneficial? I think that the somewhat astonishing answer is that nobody really knows. There are some common explanations that I will briefly review below, but none of them has been convincingly demonstrated to be true, and one cannot even be sure that having many layers is really beneficial. I say that this is astonishing, because deep learning is massively popular, is breaking all the records (from image recognition, to playing Go, to automatic translation, etc.) every year, is getting used by the industry, etc. etc. And we are still not quite sure why it works so well. I base my discussion on the Deep Learning book by Goodfellow, Bengio, and Courville which came out in 2017 and is widely considered to be the book on deep learning. (It's freely available online.) The relevant section is 6.4.1 Universal Approximation Properties and Depth. You wrote that 10 years ago in class I learned that having several layers or one layer (not counting the input and output layers) was equivalent in terms of the functions a neural network is able to represent [...] You must be referring to the so-called Universal approximation theorem, proved by Cybenko in 1989 and generalized by various people in the 1990s. It basically says that a shallow neural network (with 1 hidden layer) can approximate any function, i.e. can in principle learn anything. This is true for various nonlinear activation functions, including the rectified linear units that most neural networks are using today (the textbook references Leshno et al. 1993 for this result). If so, then why is everybody using deep nets? Well, a naive answer is that because they work better. Here is a figure from the Deep Learning book showing that it helps to have more layers in one particular task, but the same phenomenon is often observed across various tasks and domains: We know that a shallow network could perform as well as the deeper ones, but in practice it usually does not. The question is --- why? Possible answers: Maybe a shallow network would need more neurons than the deep one? Maybe a shallow network is more difficult to train with our current algorithms (e.g. it has more nasty local minima, or the convergence rate is slower, or whatever)? Maybe a shallow architecture does not fit the kind of problems we are usually trying to solve (e.g. object recognition is a quintessential "deep", hierarchical process)? Something else? The Deep Learning book argues for bullet points #1 and #3.
First, it argues that the number of units in a shallow network grows exponentially with task complexity. So in order to be useful a shallow network might need to be very big; possibly much bigger than a deep network. This is based on a number of papers proving that shallow networks would in some cases need exponentially many neurons; but whether e.g. MNIST classification or Go playing are such cases is not really clear. Second, the book says this: Choosing a deep model encodes a very general belief that the function we want to learn should involve composition of several simpler functions. This can be interpreted from a representation learning point of view as saying that we believe the learning problem consists of discovering a set of underlying factors of variation that can in turn be described in terms of other, simpler underlying factors of variation. I think the current "consensus" is that it's a combination of bullet points #1 and #3: for real-world tasks deep architecture are often beneficial and shallow architecture would be inefficient and require a lot more neurons for the same performance. But it's far from proven. Consider e.g. Zagoruyko and Komodakis, 2016, Wide Residual Networks . Residual networks with 150+ layers appeared in 2015 and won various image recognition contests. This was a big success and looked like a compelling argument in favour of deepness; here is one figure from a presentation by the first author on the residual network paper (note that the time confusingly goes to the left here): But the paper linked above shows that a "wide" residual network with "only" 16 layers can outperform "deep" ones with 150+ layers. If this is true, then the whole point of the above figure breaks down. Or consider Ba and Caruana, 2014, Do Deep Nets Really Need to be Deep? : In this paper we provide empirical evidence that shallow nets are capable of learning the same function as deep nets, and in some cases with the same number of parameters as the deep nets. We do this by first training a state-of-the-art deep model, and then training a shallow model to mimic the deep model. The mimic model is trained using the model compression scheme described in the next section. Remarkably, with model compression we are able to train shallow nets to be as accurate as some deep models, even though we are not able to train these shallow nets to be as accurate as the deep nets when the shallow nets are trained directly on the original labeled training data. If a shallow net with the same number of parameters as a deep net can learn to mimic a deep net with high fidelity, then it is clear that the function learned by that deep net does not really have to be deep. If true, this would mean that the correct explanation is rather my bullet #2, and not #1 or #3. As I said --- nobody really knows for sure yet. Concluding remarks The amount of progress achieved in the deep learning over the last ~10 years is truly amazing, but most of this progress was achieved by trial and error, and we still lack very basic understanding about what exactly makes deep nets to work so well. Even the list of things that people consider to be crucial for setting up an effective deep network seems to change every couple of years. The deep learning renaissance started in 2006 when Geoffrey Hinton (who had been working on neural networks for 20+ years without much interest from anybody) published a couple of breakthrough papers offering an effective way to train deep networks ( Science paper , Neural computation paper ). 
The trick was to use unsupervised pre-training before starting the gradient descent. These papers revolutionized the field, and for a couple of years people thought that unsupervised pre-training was the key. Then in 2010 Martens showed that deep neural networks can be trained with second-order methods (so-called Hessian-free methods) and can outperform networks trained with pre-training: Deep learning via Hessian-free optimization. Then in 2013 Sutskever et al. showed that stochastic gradient descent with some very clever tricks can outperform Hessian-free methods: On the importance of initialization and momentum in deep learning. Also, around 2010 people realized that using rectified linear units instead of sigmoid units makes a huge difference for gradient descent. Dropout appeared in 2014. Residual networks appeared in 2015. People keep coming up with more and more effective ways to train deep networks, and what seemed like a key insight 10 years ago is often considered a nuisance today. All of that is largely driven by trial and error, and there is little understanding of what makes some things work so well and some other things not. Training deep networks is like a big bag of tricks. Successful tricks are usually rationalized post factum. We don't even know why deep networks reach a performance plateau; just 10 years ago people used to blame local minima, but the current thinking is that this is not the point (when the performance plateaus, the gradients tend to stay large). This is such a basic question about deep networks, and we don't even know this. Update: This is more or less the subject of Ali Rahimi's NIPS 2017 talk on machine learning as alchemy: https://www.youtube.com/watch?v=Qi1Yry33TQE . [This answer was entirely re-written in April 2017, so some of the comments below do not apply anymore.]
{ "source": [ "https://stats.stackexchange.com/questions/182734", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/49548/" ] }
182,775
In many neural network libraries, there are 'embedding layers', like in Keras or Lasagne . I am not sure I understand its function, despite reading the documentation. For example, in the Keras documentation it says: Turn positive integers (indexes) into denses vectors of fixed size, eg. [[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]] Could a knowledgeable person explain what it does, and when you would use it? EDIT: Regarding pasting in documentation, there is not much to paste from the documentation, hence my question. I don't understand the transformation it does, nor why it should be used. Anyway, this is how it's explained in Keras: Embedding keras.layers.embeddings.Embedding(input_dim, output_dim, init='uniform', input_length=None, weights=None, W_regularizer=None, W_constraint=None, mask_zero=False) Turn positive integers (indexes) into denses vectors of fixed size, eg. [[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]] Input shape: 2D tensor with shape: (nb_samples, sequence_length). Output shape: 3D tensor with shape: (nb_samples, sequence_length, output_dim). Arguments: input_dim: int >= 0. Size of the vocabulary, ie. 1+maximum integer index occurring in the input data. output_dim: int >= 0. Dimension of the dense embedding And here it's how it's explained in Lasagne: A layer for word embeddings. The input should be an integer type Tensor variable. Parameters: incoming : a Layer instance or a tuple The layer feeding into this layer, or the expected input shape. input_size: int The Number of different embeddings. The last embedding will have index input_size - 1. output_size : int The size of each embedding. W : Theano shared variable, expression, numpy array or callable Initial value, expression or initializer for the embedding matrix. This should be a matrix with shape (input_size, output_size). See lasagne.utils.create_param() for more information. Examples >>> from lasagne.layers import EmbeddingLayer, InputLayer, get_output >>> import theano >>> x = T.imatrix() >>> l_in = InputLayer((3, )) >>> W = np.arange(3*5).reshape((3, 5)).astype('float32') >>> l1 = EmbeddingLayer(l_in, input_size=3, output_size=5, W=W) >>> output = get_output(l1, x) >>> f = theano.function([x], output) >>> x_test = np.array([[0, 2], [1, 2]]).astype('int32') >>> f(x_test) array([[[ 0., 1., 2., 3., 4.], [ 10., 11., 12., 13., 14.]], [[ 5., 6., 7., 8., 9.], [ 10., 11., 12., 13., 14.]]], dtype=float32)
Relation to Word2Vec: [Figure: Word2Vec in a simple picture (source: netdna-ssl.com)] More in-depth explanation: I believe it's related to the recent Word2Vec innovation in natural language processing. Roughly, Word2Vec means our vocabulary is discrete and we will learn a map which will embed each word into a continuous vector space. Using this vector space representation will allow us to have a continuous, distributed representation of our vocabulary words. If for example our dataset consists of n-grams, we may now use our continuous word features to create a distributed representation of our n-grams. In the process of training a language model we will learn this word embedding map. The hope is that by using a continuous representation, our embedding will map similar words to similar regions. For example in the landmark paper Distributed Representations of Words and Phrases and their Compositionality, observe in Tables 6 and 7 that certain phrases have very good nearest neighbour phrases from a semantic point of view. Transforming into this continuous space allows us to use continuous metric notions of similarity to evaluate the semantic quality of our embedding. Explanation using Lasagne code Let's break down the Lasagne code snippet: x = T.imatrix() x is a matrix of integers. Okay, no problem. l_in = InputLayer((3, )) The input layer. The 3 represents the size of our vocabulary. So we have words $w_0, w_1, w_2$ for example. W = np.arange(3*5).reshape((3, 5)).astype('float32') This is our word embedding matrix. It is a 3 row by 5 column matrix with entries 0 to 14. Up until now we have the following interpretation. Our vocabulary has 3 words and we will embed our words into a 5 dimensional vector space. For example, we may represent one word $w_0 = (1,0,0)$, another word $w_1 = (0, 1, 0)$ and the other word $w_2 = (0, 0, 1)$, e.g. as one-hot sparse encodings. We can view the $W$ matrix as embedding these words via matrix multiplication. Therefore the first word $w_0 \rightarrow w_0W = [0, 1, 2, 3, 4].$ Similarly $w_1 \rightarrow w_1W = [5, 6, 7, 8, 9]$. It should be noted that, due to the one-hot sparse encoding we are using, you will also see this referred to as a table lookup. l1 = EmbeddingLayer(l_in, input_size=3, output_size=5, W=W) The embedding layer output = get_output(l1, x) Symbolic Theano expression for the embedding. f = theano.function([x], output) Theano function which computes the embedding. x_test = np.array([[0, 2], [1, 2]]).astype('int32') It's worth pausing here to discuss what exactly x_test means. First notice that all of x_test's entries are in {0, 1, 2}, i.e. range(3). x_test has 2 datapoints. The first datapoint [0, 2] represents the 2-gram $(w_0, w_2)$ and the second datapoint represents the 2-gram $(w_1, w_2)$. We wish to embed our 2-grams using our word embedding layer now. Before we do that, let's make sure we're clear about what should be returned by our embedding function f. The 2-gram $(w_0, w_2)$ is equivalent to a [[1, 0, 0], [0, 0, 1]] matrix. Applying our embedding matrix W to this sparse matrix should yield: [[0, 1, 2, 3, 4], [10, 11, 12, 13, 14]]. Note that in order to have the matrix multiplication work out, we have to apply the word embedding matrix $W$ via right multiplication to the sparse matrix representation of our 2-gram.
f(x_test) returns: array([[[ 0., 1., 2., 3., 4.], [ 10., 11., 12., 13., 14.]], [[ 5., 6., 7., 8., 9.], [ 10., 11., 12., 13., 14.]]], dtype=float32) To convince you that the 3 does indeed represent the vocabulary size, try inputting a matrix x_test = [[5, 0], [1, 2]] . You will see that it raises a matrix mis-match error.
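To see the "table lookup" point in plain numpy (a small sketch, independent of Theano/Lasagne): multiplying a one-hot matrix by the embedding matrix W gives exactly the same result as indexing rows of W, which is why embedding layers are implemented as lookups rather than dense matrix products.

import numpy as np

W = np.arange(3 * 5).reshape(3, 5).astype("float32")   # vocabulary of 3 words, 5-dimensional embeddings
idx = np.array([0, 2])                                  # the 2-gram (w0, w2)
one_hot = np.eye(3)[idx]                                # [[1,0,0],[0,0,1]]

print(one_hot @ W)   # matrix product with the sparse one-hot encoding
print(W[idx])        # equivalent table lookup -- the same rows of W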
{ "source": [ "https://stats.stackexchange.com/questions/182775", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/83492/" ] }
182,970
I was reading this article and I'm curious about the proper answer to this question. The only thing that comes to my mind is that in some countries the decimal separator is a comma, which may cause problems when sharing data in CSV, but I'm not really sure of my answer.
The CSV format specification is defined in RFC 4180. This specification was published because there is no formal specification in existence, which allows for a wide variety of interpretations of CSV files. Unfortunately, since 2005 (the date the RFC was published), nothing has changed. We still have a wide variety of implementations. The general approach defined in RFC 4180 is to enclose fields containing characters such as commas in quotation marks; this recommendation, however, is not always met by different software. The problem is that in various European locales the comma character serves as the decimal separator, so you write 0,005 instead of 0.005. Yet in other cases, commas are used instead of spaces to signal digit groups, e.g. 4,000,000.00 (see here). In both cases using commas could lead to errors in reading data from CSV files, because your software does not really know whether 0,005, 0,1 means two numbers or four different numbers (see an example here). Last but not least, if you store text in your data file, then commas are much more common in text than, for example, semicolons, so if your text is not enclosed in quotation marks, such data can also easily be read with errors. Nothing makes commas better or worse field separators, as long as CSV files are used in accordance with recommendations such as RFC 4180, which guard against the problems described above. However, if there is a risk of using a simplified CSV format that does not enclose fields in quotation marks, or of the recommendation being applied inconsistently, then other separators (e.g. the semicolon) seem to be a safer approach.
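As a small illustration (a sketch using Python's standard csv module): writing with the module's default quoting rules produces RFC 4180-style output, where a field containing a comma is wrapped in quotation marks and therefore reads back correctly.

import csv, io

rows = [["city", "note", "value"],
        ["Berlin", "pop. 3,600,000", "0.005"]]   # the second field contains commas

buf = io.StringIO()
csv.writer(buf).writerows(rows)                  # default QUOTE_MINIMAL quoting
print(buf.getvalue())                            # the comma-bearing field comes out quoted

buf.seek(0)
print(list(csv.reader(buf)))                     # round-trips back to the original fields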
{ "source": [ "https://stats.stackexchange.com/questions/182970", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/95306/" ] }
183,006
Maximum likelihood estimation often results in biased estimators (e.g., its estimate of the variance is biased for the Gaussian distribution). What then makes it so popular? Why exactly is it used so much? Also, what in particular makes it better than the alternative approach -- the method of moments? Also, I noticed that for the Gaussian, a simple scaling of the MLE estimator makes it unbiased. Why is this scaling not a standard procedure? I mean -- why is it that after MLE computation, it is not routine to find the necessary scaling to make the estimator unbiased? The standard practice seems to be the plain computation of the MLE estimates, except of course for the familiar Gaussian case where the scaling factor is well known.
Unbiasedness isn't necessarily especially important on its own. Aside from a very limited set of circumstances, most useful estimators are biased, however they're obtained. If two estimators have the same variance, one can readily mount an argument for preferring an unbiased one to a biased one, but that's an unusual situation to be in (that is, you may reasonably prefer unbiasedness, ceteris paribus -- but those pesky ceteris are almost never paribus). More typically, if you want unbiasedness you'll be adding some variance to get it, and then the question would be: why would you do that? Bias is how far the expected value of my estimator will be too high on average (with negative bias indicating too low). When I'm considering a small sample estimator, I don't really care about that. I'm usually more interested in how far wrong my estimator will be in this instance - my typical distance from right... something like a root-mean-square error or a mean absolute error would make more sense. So if you like low variance and low bias, asking for say a minimum mean square error estimator would make sense; these are very rarely unbiased. Bias and unbiasedness are useful notions to be aware of, but unbiasedness is not an especially useful property to seek unless you're only comparing estimators with the same variance. ML estimators tend to be low-variance; they're usually not minimum MSE, but they often have lower MSE than modifying them to be unbiased (when you can do it at all) would give you. As an example, consider estimating variance when sampling from a normal distribution: $\hat{\sigma}^2_\text{MMSE} = \frac{S^2}{n+1}, \hat{\sigma}^2_\text{MLE} = \frac{S^2}{n}, \hat{\sigma}^2_\text{Unb} = \frac{S^2}{n-1}$ (indeed the MMSE for the variance always has a larger denominator than $n-1$).
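A small simulation sketch (here $S^2$ denotes the sum of squared deviations from the sample mean, matching the formulas above) shows the trade-off for normal data: the more biased estimators have the smaller mean squared error.

import numpy as np

rng = np.random.default_rng(0)
n, sigma2, reps = 10, 4.0, 200000
x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
S2 = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)   # sum of squared deviations per sample

for label, denom in [("unbiased (n-1)", n - 1), ("MLE (n)", n), ("MMSE (n+1)", n + 1)]:
    est = S2 / denom
    bias = est.mean() - sigma2
    mse = ((est - sigma2) ** 2).mean()
    print(label, "bias:", round(bias, 3), "MSE:", round(mse, 3))
# expected ordering of MSE: (n+1) < (n) < (n-1), even though only (n-1) is unbiased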
{ "source": [ "https://stats.stackexchange.com/questions/183006", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/86275/" ] }
183,236
It is a common practice to apply PCA (principal component analysis) before a clustering algorithm (such as k-means). It is believed that it improves the clustering results in practice (noise reduction). However, I am interested in a comparative and in-depth study of the relationship between PCA and k-means. For example, Chris Ding and Xiaofeng He, 2004, K-means Clustering via Principal Component Analysis showed that "principal components are the continuous solutions to the discrete cluster membership indicators for K-means clustering". However, I have a hard time understanding this paper, and Wikipedia actually claims that it is wrong. Also, the results of the two methods are somewhat different in the sense that PCA helps to reduce the number of "features" while preserving the variance, whereas clustering reduces the number of "data-points" by summarizing several points by their expectations/means (in the case of k-means). So if the dataset consists of $N$ points with $T$ features each, PCA aims at compressing the $T$ features whereas clustering aims at compressing the $N$ data-points. I am looking for a layman's explanation of the relations between these two techniques + some more technical papers relating the two techniques.
It is true that K-means clustering and PCA appear to have very different goals and at first sight do not seem to be related. However, as explained in the Ding & He 2004 paper K-means Clustering via Principal Component Analysis , there is a deep connection between them. The intuition is that PCA seeks to represent all $n$ data vectors as linear combinations of a small number of eigenvectors, and does it to minimize the mean-squared reconstruction error. In contrast, K-means seeks to represent all $n$ data vectors via small number of cluster centroids, i.e. to represent them as linear combinations of a small number of cluster centroid vectors where linear combination weights must be all zero except for the single $1$ . This is also done to minimize the mean-squared reconstruction error. So K-means can be seen as a super-sparse PCA. Ding & He paper makes this connection more precise. Unfortunately, the Ding & He paper contains some sloppy formulations (at best) and can easily be misunderstood. E.g. it might seem that Ding & He claim to have proved that cluster centroids of K-means clustering solution lie in the $(K-1)$ -dimensional PCA subspace: Theorem 3.3. Cluster centroid subspace is spanned by the first $K-1$ principal directions [...]. For $K=2$ this would imply that projections on PC1 axis will necessarily be negative for one cluster and positive for another cluster, i.e. PC2 axis will separate clusters perfectly. This is either a mistake or some sloppy writing; in any case, taken literally, this particular claim is false. Let's start with looking at some toy examples in 2D for $K=2$ . I generated some samples from the two normal distributions with the same covariance matrix but varying means. I then ran both K-means and PCA. The following figure shows the scatter plot of the data above, and the same data colored according to the K-means solution below. I also show the first principal direction as a black line and class centroids found by K-means with black crosses. PC2 axis is shown with the dashed black line. K-means was repeated $100$ times with random seeds to ensure convergence to the global optimum. One can clearly see that even though the class centroids tend to be pretty close to the first PC direction, they do not fall on it exactly. Moreover, even though PC2 axis separates clusters perfectly in subplots 1 and 4, there is a couple of points on the wrong side of it in subplots 2 and 3. So the agreement between K-means and PCA is quite good, but it is not exact. So what did Ding & He prove? For simplicity, I will consider only $K=2$ case. Let the number of points assigned to each cluster be $n_1$ and $n_2$ and the total number of points $n=n_1+n_2$ . Following Ding & He, let's define cluster indicator vector $\mathbf q\in\mathbb R^n$ as follows: $q_i = \sqrt{n_2/nn_1}$ if $i$ -th points belongs to cluster 1 and $q_i = -\sqrt{n_1/nn_2}$ if it belongs to cluster 2. Cluster indicator vector has unit length $\|\mathbf q\| = 1$ and is "centered", i.e. its elements sum to zero $\sum q_i = 0$ . Ding & He show that K-means loss function $\sum_k \sum_i (\mathbf x_i^{(k)} - \boldsymbol \mu_k)^2$ (that K-means algorithm minimizes), where $x_i^{(k)}$ is the $i$ -th element in cluster $k$ , can be equivalently rewritten as $-\mathbf q^\top \mathbf G \mathbf q$ , where $\mathbf G$ is the $n\times n$ Gram matrix of scalar products between all points: $\mathbf G = \mathbf X_c \mathbf X_c^\top$ , where $\mathbf X$ is the $n\times 2$ data matrix and $\mathbf X_c$ is the centered data matrix. 
(Note: I am using notation and terminology that slightly differs from their paper but that I find clearer). So the K-means solution $\mathbf q$ is a centered unit vector maximizing $\mathbf q^\top \mathbf G \mathbf q$ . It is easy to show that the first principal component (when normalized to have unit sum of squares) is the leading eigenvector of the Gram matrix, i.e. it is also a centered unit vector $\mathbf p$ maximizing $\mathbf p^\top \mathbf G \mathbf p$ . The only difference is that $\mathbf q$ is additionally constrained to have only two different values whereas $\mathbf p$ does not have this constraint. In other words, K-means and PCA maximize the same objective function , with the only difference being that K-means has additional "categorical" constraint. It stands to reason that most of the times the K-means (constrained) and PCA (unconstrained) solutions will be pretty to close to each other, as we saw above in the simulation, but one should not expect them to be identical. Taking $\mathbf p$ and setting all its negative elements to be equal to $-\sqrt{n_1/nn_2}$ and all its positive elements to $\sqrt{n_2/nn_1}$ will generally not give exactly $\mathbf q$ . Ding & He seem to understand this well because they formulate their theorem as follows: Theorem 2.2. For K-means clustering where $K= 2$ , the continuous solution of the cluster indicator vector is the [first] principal component Note that words "continuous solution". After proving this theorem they additionally comment that PCA can be used to initialize K-means iterations which makes total sense given that we expect $\mathbf q$ to be close to $\mathbf p$ . But one still needs to perform the iterations, because they are not identical. However, Ding & He then go on to develop a more general treatment for $K>2$ and end up formulating Theorem 3.3 as Theorem 3.3. Cluster centroid subspace is spanned by the first $K-1$ principal directions [...]. I did not go through the math of Section 3, but I believe that this theorem in fact also refers to the "continuous solution" of K-means, i.e. its statement should read "cluster centroid space of the continuous solution of K-means is spanned [...]". Ding & He, however, do not make this important qualification, and moreover write in their abstract that Here we prove that principal components are the continuous solutions to the discrete cluster membership indicators for K-means clustering. Equivalently, we show that the subspace spanned by the cluster centroids are given by spectral expansion of the data covariance matrix truncated at $K-1$ terms. The first sentence is absolutely correct, but the second one is not. It is not clear to me if this is a (very) sloppy writing or a genuine mistake. I have very politely emailed both authors asking for clarification. (Update two months later: I have never heard back from them.) Matlab simulation code figure('Position', [100 100 1200 600]) n = 50; Sigma = [2 1.8; 1.8 2]; for i=1:4 means = [0 0; i*2 0]; rng(42) X = [bsxfun(@plus, means(1,:), randn(n,2) * chol(Sigma)); ... 
bsxfun(@plus, means(2,:), randn(n,2) * chol(Sigma))]; X = bsxfun(@minus, X, mean(X)); [U,S,V] = svd(X,0); [ind, centroids] = kmeans(X,2, 'Replicates', 100); subplot(2,4,i) scatter(X(:,1), X(:,2), [], [0 0 0]) subplot(2,4,i+4) hold on scatter(X(ind==1,1), X(ind==1,2), [], [1 0 0]) scatter(X(ind==2,1), X(ind==2,2), [], [0 0 1]) plot([-1 1]*10*V(1,1), [-1 1]*10*V(2,1), 'k', 'LineWidth', 2) plot(centroids(1,1), centroids(1,2), 'w+', 'MarkerSize', 15, 'LineWidth', 4) plot(centroids(1,1), centroids(1,2), 'k+', 'MarkerSize', 10, 'LineWidth', 2) plot(centroids(2,1), centroids(2,2), 'w+', 'MarkerSize', 15, 'LineWidth', 4) plot(centroids(2,1), centroids(2,2), 'k+', 'MarkerSize', 10, 'LineWidth', 2) plot([-1 1]*5*V(1,2), [-1 1]*5*V(2,2), 'k--') end for i=1:8 subplot(2,4,i) axis([-8 8 -8 8]) axis square set(gca,'xtick',[],'ytick',[]) end
{ "source": [ "https://stats.stackexchange.com/questions/183236", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/67168/" ] }
183,265
Let's say I have some data, and then I fit the data with a model (a non-linear regression). Then I calculate the R-squared ($R^2$). When R-squared is negative, what does that mean? Does that mean my model is bad? I know the range of $R^2$ can be [-1,1]. When $R^2$ is 0, what does that mean as well?
$R^2$ can be negative; it just means that either: The model fits your data very badly You did not set an intercept To the people saying that $R^2$ is between 0 and 1, this is not the case. While a negative value for something with the word 'squared' in it might sound like it breaks the rules of maths, it can happen in an $R^2$ model without an intercept. To understand why, we need to look at how $R^2$ is calculated. This is a bit long - If you want the answer without understanding it, then skip to the end. Otherwise, I've tried to write this in simple words. First, let's define 3 variables: $RSS$, $TSS$ and $ESS$. Calculating RSS : For every independent variable $x$, we have the dependent variable $y$. We plot a line of best fit, which predicts the value of $y$ for each value of $x$. Let's call the values of $y$ the line predicts $\hat y$. The error between what your line predicts and what the actual $y$ value is can be calculated by subtraction. All these differences are squared and added up, which gives the Residual Sum of Squares $RSS$. Putting that into an equation, $RSS = \sum (y - \hat y)^2$ Calculating TSS : We can calculate the average value of $y$, which is called $\bar y$. If we plot $\bar y$, it is just a horizontal line through the data because it is constant. What we can do with it though, is subtract $\bar y$ (the average value of $y$) from every actual value of $y$. The results are squared and added together, which gives the total sum of squares $TSS$. Putting that into an equation $TSS = \sum (y - \bar y)^2$ Calculating ESS : The differences between $\hat y$ (the values of $y$ predicted by the line) and the average value $\bar y$ are squared and added. This is the Explained sum of squares, which equals $\sum (\hat y - \bar y)^2$ Remember, $TSS = \sum (y - \bar y)^2$, but we can add a $+ \hat y - \hat y$ into it, because it cancels itself out. Therefore, $TSS = \sum (y - \hat y + \hat y -\bar y)^2$. Expanding these brackets, we get $TSS = \sum (y - \hat y)^2 + 2* \sum (y - \hat y)(\hat y - \bar y) + \sum (\hat y - \bar y)^2$ When, and only when, the line is plotted with an intercept, the following is always true: $2* \sum (y - \hat y)(\hat y - \bar y) = 0$. Therefore, $TSS = \sum (y - \hat y)^2 + \sum (\hat y - \bar y)^2$, which you may notice just means that $TSS = RSS + ESS$. If we divide all terms by $TSS$ and rearrange, we get $1 - \frac {RSS}{TSS} = \frac {ESS}{TSS}$. Here's the important part : $R^2$ is defined as how much of the variance is explained by your model (how good your model is). In equation form, that's $R^2 = 1 - \frac {RSS}{TSS}$. Look familiar? When the line is plotted with an intercept, we can substitute this as $R^2 = \frac {ESS}{TSS}$. Since both the numerator and denominator are sums of squares, $R^2$ must be non-negative. BUT When we don't specify an intercept, $2* \sum (y - \hat y)(\hat y - \bar y)$ does not necessarily equal $0$. This means that $TSS = RSS + ESS + 2* \sum (y - \hat y)(\hat y - \bar y)$. Dividing all terms by $TSS$, we get $1 - \frac{RSS}{TSS} = \frac {ESS + 2* \sum (y - \hat y)(\hat y - \bar y)}{TSS}$. Finally, we substitute to get $R^2 = \frac {ESS + 2* \sum (y - \hat y)(\hat y - \bar y)}{TSS}$. This time, the numerator has a term in it which is not a sum of squares, so it can be negative. This would make $R^2$ negative. When would this happen? $2* \sum (y - \hat y)(\hat y - \bar y)$ would be negative when $y - \hat y$ is negative and $\hat y - \bar y$ is positive, or vice versa.
This occurs when the horizontal line of $\bar y$ actually explains the data better than the line of best fit. Here's an exaggerated example of when $R^2$ is negative (Source: University of Houston Clear Lake) Put simply: When $R^2 < 0$ , a horizontal line explains the data better than your model. You also asked about $R^2 = 0$ . When $R^2 = 0$ , a horizontal line explains the data equally as well as your model. I commend you for making it through that. If you found this helpful, you should also upvote fcop's answer here which I had to refer to, because it's been a while.
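A tiny numerical illustration of the no-intercept case (a sketch in Python; the data are made up so that a horizontal line at $\bar y$ fits far better than any line through the origin):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([10.2, 9.9, 10.1, 10.3, 9.8])      # essentially flat around 10

slope = (x * y).sum() / (x * x).sum()           # least-squares fit through the origin (no intercept)
y_hat = slope * x

rss = ((y - y_hat) ** 2).sum()
tss = ((y - y.mean()) ** 2).sum()
print(1 - rss / tss)                            # a large negative R^2: the horizontal mean line wins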
{ "source": [ "https://stats.stackexchange.com/questions/183265", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/41749/" ] }
183,285
The data has about 40 features and 500,000 instances, and the data is sparse. I wish to fit an SVM model to the data. To fit the SVM, I need to scale the data first. However, if the data contains many outliers, scaling is likely not to work very well. So the problem is: how can I find outliers in the data?
{ "source": [ "https://stats.stackexchange.com/questions/183285", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/95466/" ] }