183,871
What are the pros & cons of both methods?
Maximum likelihood estimation is a general approach to estimating parameters in statistical models by maximizing the likelihood function defined as $$ L(\theta|X) = f(X|\theta) $$ that is, the probability of obtaining data $X$ given some value of the parameter $\theta$. Knowing the likelihood function for a given problem, you can look for the $\theta$ that maximizes the probability of obtaining the data you have. Sometimes we have well-known estimators, e.g. the arithmetic mean is the MLE of the $\mu$ parameter of the normal distribution, but in other cases you have to use other methods, including optimization algorithms. The ML approach does not tell you how to find the optimal value of $\theta$ -- you could simply take guesses and use the likelihood to compare which guess was better -- it only tells you how to judge whether one value of $\theta$ is "more likely" than another. Gradient descent is an optimization algorithm. You can use it to find the minimum (or the maximum, in which case it is called gradient ascent) of many different functions. The algorithm does not really care what function it is minimizing; it just does what it was asked to do. So when using an optimization algorithm you have to know, somehow, how to tell whether one value of the parameter of interest is "better" than another. You have to provide your algorithm some function to minimize, and the algorithm will deal with finding its minimum. You can obtain maximum likelihood estimates using different methods, and using an optimization algorithm is one of them. On the other hand, gradient descent can also be used to maximize functions other than the likelihood function.
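To make the relationship concrete, here is a minimal R sketch (the simulated data, the starting values and the choice of optim() with the gradient-based "BFGS" method are illustrative assumptions, not something the answer prescribes): the likelihood defines what we optimize, and a general-purpose optimizer is just one way of doing the optimizing.

set.seed(42)
x <- rnorm(1000, mean = 5, sd = 2)

# negative log-likelihood of a normal model; we optimize log(sigma) to keep sigma > 0
negloglik <- function(par) {
  -sum(dnorm(x, mean = par[1], sd = exp(par[2]), log = TRUE))
}

fit <- optim(par = c(0, 0), fn = negloglik, method = "BFGS")  # gradient-based minimization
c(mu_hat = fit$par[1], sigma_hat = exp(fit$par[2]))           # numerical MLE
c(mean(x), sqrt(mean((x - mean(x))^2)))                       # closed-form MLEs, for comparison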
{ "source": [ "https://stats.stackexchange.com/questions/183871", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/54214/" ] }
184,017
I'm using scikit-learn to perform a logistic regression with cross-validation on a set of data (about 14 parameters with >7000 normalised observations). I also have a target classifier which has a value of either 1 or 0. The problem I have is that regardless of the solver used, I keep getting convergence warnings...

model1 = linear_model.LogisticRegressionCV(cv=10,verbose=1,n_jobs=-1,scoring='roc_auc',solver='newton-cg',penalty='l2')

/home/b/anaconda/lib/python2.7/site-packages/scipy/optimize/linesearch.py:285: LineSearchWarning: The line search algorithm did not converge
warn('The line search algorithm did not converge', LineSearchWarning)
/home/b/anaconda/lib/python2.7/site-packages/sklearn/utils/optimize.py:193: UserWarning: Line Search failed

model2 = linear_model.LogisticRegressionCV(cv=10,verbose=1,n_jobs=-1,scoring='roc_auc',solver='sag',penalty='l2')

max_iter reached after 2 seconds
max_iter reached after 2 seconds
max_iter reached after 2 seconds
max_iter reached after 2 seconds
max_iter reached after 2 seconds
max_iter reached after 2 seconds
max_iter reached after 2 second

model3 = linear_model.LogisticRegressionCV(cv=10,verbose=1,n_jobs=-1,scoring='roc_auc',solver='lbfgs',penalty='l2')

/home/b/anaconda/lib/python2.7/site-packages/sklearn/linear_model/logistic.py:701: UserWarning: lbfgs failed to converge. Increase the number of iterations.
warnings.warn("lbfgs failed to converge. Increase the number "

model4 = linear_model.LogisticRegressionCV(cv=10,verbose=1,n_jobs=-1,scoring='roc_auc',solver='liblinear',penalty='l2')

cg reaches trust region boundary
iter 18 act 1.382e+06 pre 1.213e+06 delta 1.860e+01 f 7.500e+06 |g| 1.696e+06 CG 8
iter 2 act 1.891e+06 pre 1.553e+06 delta 1.060e-01 f 1.397e+07 |g| 1.208e+08 CG 4
iter 4 act 2.757e+04 pre 2.618e+04 delta 1.063e-01 f 1.177e+07 |g| 2.354e+07 CG 4
iter 18 act 1.659e+04 pre 1.597e+04 delta 1.506e+01 f 7.205e+06 |g| 4.078e+06 CG 4
cg reaches trust region boundary
iter 7 act 1.117e+05 pre 1.090e+05 delta 4.146e-01 f 1.161e+07 |g| 9.522e+05 CG 4
iter 31 act 1.748e+03 pre 1.813e+03 delta 2.423e+01 f 6.228e+05 |g| 5.657e+03 CG 14

What do I need to do to stop getting the warnings?
You can start by applying the program's suggestion and increasing the max_iter parameter, but keep in mind that it is also possible that your data simply cannot be fit well by a logistic model.
{ "source": [ "https://stats.stackexchange.com/questions/184017", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/93237/" ] }
184,019
Note: I know that L1 has feature selection property. I am trying to understand which one to choose when feature selection is completely irrelevant. How to decide which regularization (L1 or L2) to use? What are the pros & cons of each of L1 / L2 regularization? Is it recommended to 1st do feature selection using L1 & then apply L2 on these selected variables?
How to decide which regularization (L1 or L2) to use? What is your goal? Both can improve model generalization by penalizing coefficients. With collinear features, coefficients with opposite relationships to the outcome can "offset" each other (a large positive value is counterbalanced by a large negative value), so small changes in the data can result in dramatically different parameter estimates (high-variance estimates). Penalization can restrain both coefficients to be smaller. (Hastie et al., Elements of Statistical Learning, 2nd edition, p. 63) What are the pros & cons of each of L1 / L2 regularization? L1 regularization can address the multicollinearity problem by constraining the coefficient norm and pinning some coefficient values to 0. Computationally, lasso regression (regression with an L1 penalty) is a quadratic program which requires some special tools to solve. When you have more features than observations $N$, the lasso will keep at most $N$ non-zero coefficients. Depending on context, that might not be what you want. L1 regularization is sometimes used as a feature selection method. Suppose you have some kind of hard cap on the number of features you can use (because data collection for all features is expensive, or you have tight engineering constraints on how many values you can store, etc.). You can try to tune the L1 penalty to hit your desired number of non-zero features. L2 regularization can address the multicollinearity problem by constraining the coefficient norm and keeping all the variables. It's unlikely to estimate a coefficient to be exactly 0. This isn't necessarily a drawback, unless a sparse coefficient vector is important for some reason. In the regression setting, it's the "classic" solution to the problem of estimating a regression with more features than observations. L2 regularization can estimate a coefficient for each feature even if there are more features than observations (indeed, this was the original motivation for "ridge regression"). As an alternative, elastic net includes L1 and L2 regularization as special cases. A typical use-case for a data scientist in industry is that you just want to pick the best model, but don't necessarily care whether it's penalized using L1, L2 or both. Elastic net is nice in situations like these. Is it recommended to 1st do feature selection using L1 & then apply L2 on these selected variables? I'm not familiar with a publication proposing an L1-then-L2 pipeline, but this is probably just ignorance on my part. There doesn't seem to be anything wrong with it. I'd conduct a literature review. A few examples of similar "phased" pipelines exist. One is the "relaxed lasso", which applies lasso regression twice, once to down-select from a large group to a small group of features, and a second time to estimate the coefficients for use in a model. This uses cross-validation at each step to choose the magnitude of the penalty. The reasoning is that in the first step, you cross-validate and will likely choose a large penalty to screen out irrelevant predictors; in the second step, you cross-validate and will likely pick a smaller penalty (and hence larger coefficients). This is mentioned briefly in Elements of Statistical Learning with a citation to Nicolai Meinshausen ("Relaxed Lasso." Computational Statistics & Data Analysis, Volume 52, Issue 1, 15 September 2007, pp. 374-393).
User @amoeba also suggests an L1-then-OLS pipeline; this might be nice because it only has 1 hyperparameter for the magnitude of the L1 penalty, so less fiddling would be required. One problem that can arise with any "phased" analysis pipeline (that is, a pipeline which does some steps, and then some other steps separately) is that there's no "visibility" between those different phases (algorithms applied at each step). This means that one process inherits any data snooping that happened at the previous steps. This effect is not negligible; poorly-conceived modeling can result in garbage models. One way to hedge against data-snooping side-effects is to cross-validate all of your choices. However, the increased computational costs can be prohibitive, depending on the scale of the data and the complexity of each step.
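As a concrete, hedged illustration of the ideas above in R using the glmnet package (the simulated data, the "lambda.1se" rule and the final OLS refit are assumptions made for this sketch, not recommendations from the answer): alpha = 1 gives the L1 (lasso) penalty, alpha = 0 the L2 (ridge) penalty, and the lasso-selected variables can then be refit without a penalty, in the spirit of the L1-then-OLS pipeline mentioned above.

library(glmnet)
set.seed(1)
n <- 200; p <- 50
X <- matrix(rnorm(n * p), n, p)
beta <- c(rep(2, 5), rep(0, p - 5))              # only 5 truly relevant features
y <- as.numeric(X %*% beta + rnorm(n))

cv_l1 <- cv.glmnet(X, y, alpha = 1)              # L1 (lasso), lambda chosen by cross-validation
cv_l2 <- cv.glmnet(X, y, alpha = 0)              # L2 (ridge)

sum(coef(cv_l1, s = "lambda.1se")[-1] != 0)      # number of features the lasso keeps
sum(coef(cv_l2, s = "lambda.1se")[-1] != 0)      # ridge keeps essentially all of them

keep  <- which(coef(cv_l1, s = "lambda.1se")[-1] != 0)
refit <- lm(y ~ X[, keep])                       # L1-then-OLS: unpenalized refit on selected features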
{ "source": [ "https://stats.stackexchange.com/questions/184019", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/54214/" ] }
184,029
Is elastic net regularization always preferred to Lasso & Ridge since it seems to solve the drawbacks of these methods? What is the intuition and what is the math behind elastic net?
1. Which method is preferred? Yes, elastic net is always preferred over lasso & ridge regression because it solves the limitations of both methods, while also including each as special cases. So if the ridge or lasso solution is, indeed, the best, then any good model selection routine will identify that as part of the modeling process. Comments to my post have pointed out that the advantages of elastic net are not unqualified. I persist in my belief that the generality of the elastic net regression is still preferable to either $L^1$ or $L^2$ regularization on its own. Specifically, I think that the points of contention between myself and others are directly tied to what assumptions we are willing to make about the modeling process. In the presence of strong knowledge about the underlying data, some methods will be preferred to others. However, my preference for elastic net is rooted in my skepticism that one will confidently know that $L^1$ or $L^2$ is the true model. Claim: Prior knowledge may obviate the need to use elastic net regression. This is somewhat circular. Forgive me if this is somewhat glib, but if you know that LASSO (ridge) is the best solution, then you won't ask yourself how to appropriately model it; you'll just fit a LASSO (ridge) model. If you're absolutely sure that the correct answer is LASSO (ridge) regression, then you're clearly convinced that there would be no reason to waste time fitting an elastic net. But if you're slightly less certain whether LASSO (ridge) is the correct way to proceed, I believe it makes sense to estimate a more flexible model, and evaluate how strongly the data support the prior belief. Claim: Modestly large data will not permit discovery of $L^1$ or $L^2$ solutions as preferred, even in cases when the $L^1$ or $L^2$ solution is the true model. This is also true, but I think it's circular for a similar reason: if you've estimated an optimal solution and find that $\alpha\not\in \{0,1\},$ then that's the model that the data support. On the one hand, yes, your estimated model is not the true model, but I must wonder how one would know that the true model is $\alpha=1$ (or $\alpha=0$) prior to any model estimation. There might be domains where you have this kind of prior knowledge, but my professional work is not one of them. Claim: Introducing additional hyperparameters increases the computational cost of estimating the model. This is only relevant if you have tight time/computer limitations; otherwise it's just a nuisance. GLMNET is the gold-standard algorithm for estimating elastic net solutions. The user supplies some value of $\alpha$, and it uses the path properties of the regularization solution to quickly estimate a family of models for a variety of values of the penalization magnitude $\lambda$, and it can often estimate this family of solutions more quickly than estimating just one solution for a specific value of $\lambda$. So, yes, using GLMNET does consign you to the domain of grid-style methods (iterate over some values of $\alpha$ and let GLMNET try a variety of $\lambda$s), but it's pretty fast. Claim: Improved performance of elastic net over LASSO or ridge regression is not guaranteed. This is true, but at the step where one is contemplating which method to use, one will not know which of elastic net, ridge or LASSO is the best. If one reasons that the best solution must be LASSO or ridge regression, then we're in the domain of claim (1).
If we're still uncertain which is best, then we can test LASSO, ridge and elastic net solutions, and make a choice of a final model at that point (or, if you're an academic, just write your paper about all three). This situation of prior uncertainty will either place us in the domain of claim (2), where the true model is LASSO/ridge but we did not know so ahead of time and we accidentally select the wrong model due to poorly identified hyperparameters, or else elastic net is actually the best solution. Claim: Hyperparameter selection without cross-validation is highly biased and error-prone. Proper model validation is an integral part of any machine learning enterprise. Model validation is usually an expensive step, too, so one would seek to minimize inefficiencies here -- if one of those inefficiencies is needlessly trying $\alpha$ values that are known to be futile, then one might suggest skipping them. Yes, by all means do that, if you're comfortable with the strong statement that you're making about how your data are arranged -- but we're back to the territory of claim (1) and claim (2). 2. What's the intuition and math behind elastic net? I strongly suggest reading the literature on these methods, starting with the original paper on the elastic net. The paper develops the intuition and the math, and is highly readable. Reproducing it here would only be to the detriment of the authors' explanation. But the high-level summary is that the elastic net is a convex sum of ridge and lasso penalties, so the objective function for a Gaussian error model looks like $$\text{Residual Mean Square Error}+\alpha \cdot \text{Ridge Penalty}+(1-\alpha)\cdot \text{LASSO Penalty}$$ for $\alpha\in[0,1].$ Hui Zou and Trevor Hastie. "Regularization and variable selection via the elastic net." J. R. Statist. Soc. B, vol. 67 (2005), Part 2, pp. 301-320. Richard Hardy points out that this is developed in more detail in Hastie et al., "The Elements of Statistical Learning", chapters 3 and 18. 3. What if you add additional $L^q$ norms? This is a question posed to me in the comments: Let me suggest one further argument against your point of view that elastic net is uniformly better than lasso or ridge alone. Imagine that we add another penalty to the elastic net cost function, e.g. an $L^3$ cost, with a hyperparameter $\gamma$. I don't think there is much research on that, but I would bet you that if you do a cross-validation search on a 3d parameter grid, then you will get $\gamma\not =0$ as the optimal value. If so, would you then argue that it is always a good idea to include $L^3$ cost too. I appreciate that the spirit of the question is "If it's as you claim and two penalties are good, why not add another?" But I think the answer lies in why we regularize in the first place. $L^1$ regularization tends to produce sparse solutions, but also tends to select the feature most strongly correlated with the outcome and zero out the rest. Moreover, in a data set with $n$ observations, it can select at most $n$ features. $L^2$ regularization is suited to deal with ill-posed problems resulting from highly (or perfectly) correlated features. In a data set with $p$ features, $L^2$ regularization can be used to uniquely identify a model in the $p>n$ case. Setting aside either of these problems, the regularized model can still out-perform the ML model because the shrinkage properties of the estimators are "pessimistic" and pull coefficients toward 0. But I am not aware of the statistical properties for $L^3$ regularization.
In the problems I've worked on, we generally face both problems: the inclusion of poorly correlated features (hypotheses that are not borne out by the data), and co-linear features. Indeed, there are compelling reasons that $L^1$ and $L^2$ penalties on parameters are the only ones typically used. In Why do we only see $L_1$ and $L_2$ regularization but not other norms? , @whuber offers this comment: I haven't investigated this question specifically, but experience with similar situations suggests there may be a nice qualitative answer: all norms that are second differentiable at the origin will be locally equivalent to each other, of which the $L^2$ norm is the standard. All other norms will not be differentiable at the origin and $L^1$ qualitatively reproduces their behavior. That covers the gamut. In effect, a linear combination of an $L^1$ and $L^2$ norm approximates any norm to second order at the origin--and this is what matters most in regression without outlying residuals. So we can effectively cover the range of options which could possibly be provided by $L^q$ norms as combinations of $L^1$ and $L^2$ norms -- all without requiring additional hyperparameter tuning.
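To illustrate the grid-style search described above, here is a hedged R sketch with glmnet (the simulated data, the alpha grid and the shared fold assignment are assumptions for illustration; note also that glmnet's own convention puts alpha = 1 at the lasso and alpha = 0 at ridge, the reverse of the weighting in the formula of section 2):

library(glmnet)
set.seed(1)
n <- 200; p <- 40
X <- matrix(rnorm(n * p), n, p)
y <- as.numeric(X[, 1:5] %*% rep(1.5, 5) + rnorm(n))

alphas <- seq(0, 1, by = 0.1)                    # 0 = ridge, 1 = lasso, in between = elastic net
foldid <- sample(rep(1:10, length.out = n))      # fix the folds so all alphas are compared fairly
cv_err <- sapply(alphas, function(a) {
  min(cv.glmnet(X, y, alpha = a, foldid = foldid)$cvm)   # best CV error over the lambda path
})
alphas[which.min(cv_err)]                        # the mixing value the data support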
{ "source": [ "https://stats.stackexchange.com/questions/184029", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/54214/" ] }
184,095
Suppose I trained several models on the training set, chose the best one using a cross-validation set and measured performance on the test set. So now I have one final best model. Should I retrain it on all my available data or ship the solution trained only on the training set? If the latter, then why? UPDATE: As @P.Windridge noted, shipping a retrained model basically means shipping a model without validation. But we can report the test set performance and after that retrain the model on the complete data, justifiably expecting the performance to be better - because we use our best model plus more data. What problems may arise from such a methodology?
You will almost always get a better model after refitting on the whole sample. But as others have said you have no validation. This is a fundamental flaw in the data splitting approach. Not only is data splitting a lost opportunity to directly model sample differences in an overall model, but it is unstable unless your whole sample is perhaps larger than 15,000 subjects. This is why 100 repeats of 10-fold cross-validation are necessary (depending on the sample size) to achieve precision and stability, and why the bootstrap for strong internal validation is even better. The bootstrap also exposes how difficult and arbitrary the task of feature selection is. I have described the problems with 'external' validation in more detail in BBR Chapter 10 .
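As a rough illustration of the repeated cross-validation idea in R (the simulated data, the logistic model and the Brier score below are assumptions invented for this sketch, not part of the answer): the model is refit inside every fold of every repeat, and the spread of the estimates across repeats shows how (un)stable a single split would have been.

set.seed(1)
n <- 300
x <- matrix(rnorm(n * 5), n, 5); colnames(x) <- paste0("x", 1:5)
y <- rbinom(n, 1, plogis(x[, 1] - x[, 2]))
dat <- data.frame(y = y, x)

repeats <- 100; k <- 10
est <- replicate(repeats, {
  fold <- sample(rep(1:k, length.out = n))
  mean(sapply(1:k, function(f) {
    fit <- glm(y ~ ., data = dat[fold != f, ], family = binomial)
    p   <- predict(fit, dat[fold == f, ], type = "response")
    mean((dat$y[fold == f] - p)^2)               # Brier score on the held-out fold
  }))
})
c(mean = mean(est), sd = sd(est))                # average estimate and its repeat-to-repeat spread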
{ "source": [ "https://stats.stackexchange.com/questions/184095", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/90527/" ] }
184,325
How does the inversion method work? Say I have a random sample $X_1,X_2,...,X_n$ with density $f(x;\theta)={1\over \theta} x^{(1-\theta)\over \theta}$ over $0<x<1$ and therefore with cdf $F_X(x)=x^{1/\theta}$ on $(0,1)$. Then by the inversion method I get $F_X^{-1}(u)=u^\theta$. So does $u^\theta$ have the distribution of $X$? Is this how the inversion method works? u<-runif(n) x<-u^(theta)
The method is very simple, so I'll describe it in simple words. First, take the cumulative distribution function $F_X$ of some distribution that you want to sample from. The function takes as input some value $x$ and tells you what is the probability of obtaining $X \leq x$. So $$ F_X(x) = \Pr(X \leq x) = p $$ The inverse of this function, $F_X^{-1}$, would take $p$ as input and return $x$. Notice that the $p$'s are uniformly distributed -- this can be used for sampling from any $F_X$ if you know $F_X^{-1}$. The method is called inverse transform sampling . The idea is very simple: it is easy to sample values uniformly from $U(0, 1)$, so if you want to sample from some $F_X$, just take values $u \sim U(0, 1)$ and pass $u$ through $F_X^{-1}$ to obtain $x$'s $$ F_X^{-1}(u) = x $$ or in R (for the normal distribution) U <- runif(1e6) X <- qnorm(U) To visualize it, look at a plot of the CDF: generally, we think of distributions in terms of looking at the $y$-axis for probabilities of values from the $x$-axis. With this sampling method we do the opposite and start with "probabilities", using them to pick the values that are related to them. With discrete distributions you treat $U$ as a line from $0$ to $1$ and assign values based on where some point $u$ lies on this line (e.g. $0$ if $0 \leq u < 0.5$ or $1$ if $0.5 \leq u \leq 1$ for sampling from $\mathrm{Bernoulli}(0.5)$). Unfortunately, this is not always possible since not every function has an inverse, e.g. you cannot use this method with bivariate distributions. It also does not have to be the most efficient method in all situations; in many cases better algorithms exist. You also ask what is the distribution of $F_X^{-1}(u)$. Since $F_X^{-1}$ is an inverse of $F_X$, then $F_X(F_X^{-1}(u)) = u$ and $F_X^{-1}(F_X(x)) = x$, so yes, values obtained using this method have the same distribution as $X$. You can check this by a simple simulation U <- runif(1e6) all.equal(pnorm(qnorm(U)), U)
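For the specific distribution in the question, a quick check along the same lines (the value of theta and the sample size are arbitrary choices for illustration):

theta <- 2
n <- 1e5
u <- runif(n)
x <- u^theta                                   # the proposed inverse-CDF transform
# compare the empirical CDF of x with the theoretical F(x) = x^(1/theta)
grid <- seq(0.05, 0.95, by = 0.05)
max(abs(ecdf(x)(grid) - grid^(1/theta)))       # should be close to 0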
{ "source": [ "https://stats.stackexchange.com/questions/184325", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/30557/" ] }
184,657
An artificial intelligence website defines off-policy and on-policy learning as follows: "An off-policy learner learns the value of the optimal policy independently of the agent's actions. Q-learning is an off-policy learner. An on-policy learner learns the value of the policy being carried out by the agent including the exploration steps." I would like to ask for your clarification regarding this, because the two definitions seem identical to me. What I do understand is the distinction between model-free and model-based learning, and I don't know whether it has anything to do with the one in question. How is it possible that the optimal policy is learned independently of the agent's actions? Isn't the policy learned when the agent performs the actions?
First of all, there's no reason that an agent has to take the greedy action; agents can explore or they can follow options. This is not what separates on-policy from off-policy learning. The reason that Q-learning is off-policy is that it updates its Q-values using the Q-value of the next state $s'$ and the greedy action $a'$. In other words, it estimates the return (total discounted future reward) for state-action pairs assuming a greedy policy were followed, despite the fact that it's not following a greedy policy. The reason that SARSA is on-policy is that it updates its Q-values using the Q-value of the next state $s'$ and the current policy's action $a''$. It estimates the return for state-action pairs assuming the current policy continues to be followed. The distinction disappears if the current policy is a greedy policy. However, such an agent would not be good since it never explores. Have you looked at the book available for free online? Richard S. Sutton and Andrew G. Barto. Reinforcement learning: An introduction. Second edition, MIT Press, Cambridge, MA, 2018.
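To make the difference concrete, here is a minimal R sketch of the two updates applied to one observed transition (the toy Q table, the transition and the values of alpha and gamma are invented for illustration; this is not from the answer or the book):

set.seed(1)
Q <- matrix(runif(10), nrow = 5, ncol = 2)     # a toy 5-state, 2-action Q table
alpha <- 0.1; gamma <- 0.9
s <- 1; a <- 2; r <- 0; s2 <- 2; a2 <- 1       # one transition; a2 is the action the behaviour policy picks in s2

# Q-learning (off-policy): bootstraps with the greedy action in s2,
# regardless of what the behaviour policy actually does next
target_off <- r + gamma * max(Q[s2, ])

# SARSA (on-policy): bootstraps with the action a2 the current policy actually chose
target_on  <- r + gamma * Q[s2, a2]

Q[s, a] + alpha * (target_off - Q[s, a])       # Q-learning update of Q(s, a)
Q[s, a] + alpha * (target_on  - Q[s, a])       # SARSA update of Q(s, a)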
{ "source": [ "https://stats.stackexchange.com/questions/184657", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/67413/" ] }
184,753
As part of the output of a generalised linear model, the null and residual deviance are used to evaluate the model. I often see the formulas for these quantities expressed in terms of the log likelihood of the saturated model, for example: https://stats.stackexchange.com/a/113022/22199 , Logistic Regression : How to obtain a saturated model The saturated model, as far as I understand it, is the model that perfectly fits the observed response. Thus, in most places I have seen, the log-likelihood of the saturated model is always given as zero. Yet, the way the formula for deviance is given suggests that sometimes this quantity is non-zero. (Because if it were always zero, why bother including it?) In what cases can it be non-zero? If it is never non-zero, why include it in the formula for deviance?
If you really meant log-likelihood , then the answer is: it's not always zero. For example, consider Poisson data: $y_i \sim \text{Poisson}(\mu_i), i = 1, \ldots, n$ . The log-likelihood for $Y = (y_1, \ldots, y_n)$ is given by: $$\ell(\mu; Y) = -\sum_{i = 1}^n \mu_i + \sum_{i = 1}^n y_i \log \mu_i - \sum_{i = 1}^n \log(y_i!). \tag{$*$}$$ Differentiate $\ell(\mu; Y)$ in $(*)$ with respect to $\mu_i$ and set it to $0$ (this is how we obtain the MLE for the saturated model): $$-1 + \frac{y_i}{\mu_i} = 0.$$ Solve this for $\mu_i$ to get $\hat{\mu}_i = y_i$ ; substituting $\hat{\mu}_i$ back into $(*)$ for $\mu_i$ gives that the log-likelihood of the saturated model is: $$\ell(\hat{\mu}; Y) = \sum_{i = 1}^n y_i(\log y_i - 1) -\sum_{i = 1}^n \log(y_i!) \neq 0$$ unless the $y_i$ take very special values. In the help page of the R function glm , under the item deviance , the document explains this issue as follows: deviance up to a constant, minus twice the maximized log-likelihood. Where sensible, the constant is chosen so that a saturated model has deviance zero. Notice that it mentions that the deviance , instead of the log-likelihood of the saturated model, is chosen to be zero. Probably, what you really wanted to confirm is that "the deviance of the saturated model is always given as zero", which is true, since the deviance, by definition (see Section 4.5.1 of Categorical Data Analysis (2nd Edition) by Alan Agresti) is the likelihood ratio statistic of a specified GLM to the saturated model. The constant aforementioned in the R documentation is actually twice the maximized log-likelihood of the saturated model. Regarding your statement "Yet, the way the formula for deviance is given suggests that sometimes this quantity is non zero.", it is probably due to loose usage of the term deviance . For instance, in R, the likelihood ratio statistic for comparing two arbitrary (nested) models $M_1$ and $M_2$ is also referred to as deviance, which would be more precisely termed the difference between the deviance of $M_1$ and the deviance of $M_2$ , if we closely followed the definition as given in Agresti's book. Conclusion The log-likelihood of the saturated model is in general non-zero. The deviance (in its original definition) of the saturated model is zero. The deviance output from software (such as R) is in general non-zero as it actually means something else (the difference between deviances). The following are the derivation for the general exponential-family case and another concrete example. Suppose that data come from an exponential family (see Modern Applied Statistics with S , Chapter $7$ ): $$f(y_i; \theta_i, \varphi) = \exp[A_i(y_i\theta_i - \gamma(\theta_i))/\varphi + \tau(y_i, \varphi/A_i)]. \tag{1}$$ where the $A_i$ are known prior weights and $\varphi$ is the dispersion/scale parameter (for many cases such as binomial and Poisson, this parameter is known, while for other cases such as normal and Gamma, this parameter is unknown). Then the log-likelihood is given by: $$\ell(\theta, \varphi; Y) = \sum_{i = 1}^n A_i(y_i \theta_i - \gamma(\theta_i))/\varphi + \sum_{i = 1}^n \tau(y_i, \varphi/A_i).$$
As in the Poisson example, the saturated model's parameters can be estimated by solving the following score equation: $$0 = U(\theta_i) = \frac{\partial \ell(\theta, \varphi; Y)}{\partial \theta_i} = \frac{A_i(y_i - \gamma'(\theta_i))}{\varphi}$$ Denote the solution of the above equation by $\hat{\theta}_i$ ; then the general form of the log-likelihood of the saturated model (treating the scale parameter as constant) is: $$\ell(\hat{\theta}, \varphi; Y) = \sum_{i = 1}^n A_i(y_i \hat{\theta}_i - \gamma(\hat{\theta}_i))/\varphi + \sum_{i = 1}^n \tau(y_i, \varphi/A_i). \tag{$**$}$$ In my previous answer, I incorrectly stated that the first term on the right side of $(**)$ is always zero; the Poisson data example above proves it wrong. For a more complicated example, consider the Gamma distribution $\Gamma(\alpha, \beta)$ given in the appendix. Proof that the first term in the log-likelihood of the saturated Gamma model is non-zero : Given $$f(y; \alpha, \beta) = \frac{\beta^\alpha}{\Gamma(\alpha)}e^{-\beta y}y^{\alpha - 1}, \quad y > 0, \alpha > 0, \beta > 0,$$ we must do a reparameterization first so that $f$ has the exponential family form $(1)$ . It can be verified that if we let $$\varphi = \frac{1}{\alpha},\, \theta = -\frac{\beta}{\alpha},$$ then $f$ has the representation: $$f(y; \theta, \varphi) = \exp\left[\frac{\theta y - (-\log(-\theta))}{\varphi}+ \tau(y, \varphi)\right],$$ where $$\tau(y, \varphi) = -\frac{\log \varphi}{\varphi} + \left(\frac{1}{\varphi} - 1\right)\log y - \log\Gamma(\varphi^{-1}).$$ Therefore, the MLEs of the saturated model are $\hat{\theta}_i = -\frac{1}{y_i}$ . Hence $$\sum_{i = 1}^n \frac{1}{\varphi}[\hat{\theta}_iy_i - (-\log(-\hat{\theta}_i))] = \sum_{i = 1}^n \frac{1}{\varphi}[-1 - \log(y_i)] \neq 0, $$ unless the $y_i$ take very special values.
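A quick numerical check of these points in R with simulated Poisson data (the data and the single covariate are arbitrary; fitted() and dpois() are used only to compute the two log-likelihoods):

set.seed(1)
y <- rpois(50, lambda = 4)
x <- rnorm(50)
fit <- glm(y ~ x, family = poisson)

ll_fit <- sum(dpois(y, fitted(fit), log = TRUE))   # log-likelihood of the fitted model
ll_sat <- sum(dpois(y, y, log = TRUE))             # log-likelihood of the saturated model (mu_i = y_i)

c(ll_saturated     = ll_sat,                       # generally non-zero
  deviance_by_hand = 2 * (ll_sat - ll_fit),
  deviance_glm     = deviance(fit))                # matches the "by hand" deviance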
{ "source": [ "https://stats.stackexchange.com/questions/184753", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/22199/" ] }
184,998
I think the following two formulas are true: $$ \mathrm{Var}(aX)=a^2 \mathrm{Var}(X) $$ where $a$ is a constant, and $$ \mathrm{Var}(X + Y)=\mathrm{Var}(X)+\mathrm{Var}(Y) $$ if $X$, $Y$ are independent. However, I am not sure what is wrong with the following: $$\mathrm{Var}(2X) = \mathrm{Var}(X+X) = \mathrm{Var}(X) + \mathrm{Var}(X) $$ which does not equal $2^2 \mathrm{Var}(X)$, i.e. $4\mathrm{Var}(X)$. If it is assumed that $X$ is a sample taken from a population, I think we can always assume $X$ to be independent from the other $X$s. So what is wrong with my reasoning?
$\DeclareMathOperator{\Cov}{Cov}$ $\DeclareMathOperator{\Corr}{Corr}$ $\DeclareMathOperator{\Var}{Var}$ The problem with your line of reasoning is "I think we can always assume $X$ to be independent from the other $X$s." $X$ is not independent of $X$. The symbol $X$ is being used to refer to the same random variable here. Once you know the value of the first $X$ to appear in your formula, this also fixes the value of the second $X$ to appear. If you want them to refer to distinct (and potentially independent) random variables, you need to denote them with different letters (e.g. $X$ and $Y$) or using subscripts (e.g. $X_1$ and $X_2$); the latter is often (but not always) used to denote variables drawn from the same distribution. If two variables $X$ and $Y$ are independent then $\Pr(X=a|Y=b)$ is the same as $\Pr(X=a)$: knowing the value of $Y$ does not give us any additional information about the value of $X$. But $\Pr(X=a|X=b)$ is $1$ if $a=b$ and $0$ otherwise: knowing the value of $X$ gives you complete information about the value of $X$. [You can replace the probabilities in this paragraph by cumulative distribution functions, or where appropriate, probability density functions, to essentially the same effect.] Another way of seeing things is that if two variables are independent then they have zero correlation (though zero correlation does not imply independence !) but $X$ is perfectly correlated with itself, $\Corr(X,X)=1$ so $X$ can't be independent of itself. Note that since the covariance is given by $\Cov(X,Y)=\Corr(X,Y)\sqrt{\Var(X)\Var(Y)}$, then $$\Cov(X,X)=1\sqrt{\Var(X)^2}=\Var(X)$$ The more general formula for the variance of a sum of two random variables is $$\Var(X+Y) = \Var(X) + \Var(Y) + 2 \Cov(X,Y)$$ In particular, $\Cov(X,X) = \Var(X)$, so $$\Var(X+X) = \Var(X) + \Var(X) + 2\Var(X) = 4\Var(X)$$ which is the same as you would have deduced from applying the rule $$\Var(aX) = a^2 \Var(X) \implies \Var(2X) = 4\Var(X)$$ If you are interested in linearity, then you might be interested in the bilinearity of covariance. For random variables $W$, $X$, $Y$ and $Z$ (whether dependent or independent) and constants $a$, $b$, $c$ and $d$ we have $$\Cov(aW + bX, Y) = a \Cov(W,Y) + b \Cov(X,Y)$$ $$\Cov(X, cY + dZ) = c \Cov(X,Y) + d \Cov(X,Z)$$ and overall, $$\Cov(aW + bX, cY + dZ) = ac \Cov(W,Y) + ad \Cov(W,Z) + bc \Cov(X,Y) + bd \Cov(X,Z)$$ You can then use this to prove the (non-linear) results for variance that you wrote in your post: $$\Var(aX) = \Cov(aX, aX) = a^2 \Cov(X,X) = a^2 \Var(X)$$ $$ \begin{align} \Var(aX + bY) &= \Cov(aX + bY, aX + bY) \\ &= a^2 \Cov(X,X) + ab \Cov(X,Y) + ba \Cov (X,Y) + b^2 \Cov(Y,Y) \\ \Var(aX + bY) &= a^2 \Var(X) + b^2 \Var(Y) + 2ab \Cov(X,Y) \end{align} $$ The latter gives, as a special case when $a=b=1$, $$\Var(X+Y) = \Var(X) + \Var(Y) + 2 \Cov(X,Y)$$ When $X$ and $Y$ are uncorrelated (which includes the case where they are independent), then this reduces to $\Var(X+Y) = \Var(X) + \Var(Y)$. So if you want to manipulate variances in a "linear" way (which is often a nice way to work algebraically), then work with the covariances instead, and exploit their bilinearity.
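A quick simulation in R illustrates the two cases (the sample size and the choice sd = 3 are arbitrary; here Var(X) = 9):

set.seed(1)
x1 <- rnorm(1e6, sd = 3)     # X: one random variable, used twice below
x2 <- rnorm(1e6, sd = 3)     # an independent variable with the same distribution

var(2 * x1)                  # Var(X + X) = Var(2X): about 4 * 9 = 36
var(x1 + x2)                 # Var(X_1 + X_2) for independent variables: about 2 * 9 = 18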
{ "source": [ "https://stats.stackexchange.com/questions/184998", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/96542/" ] }
185,021
As a side project I am currently working on determining customer satisfaction over time for quite a large company. We have over 100,000 records in our dataset which need to be analysed. The dataset looks like this:

╔════════╦════════╦═════════╗
║ CustID ║ Rating ║ Loyalty ║
║ 3001   ║ 5      ║ 1       ║
║ 3001   ║ 4      ║ 2       ║
║ 3001   ║ 4      ║ 3       ║
║ 3001   ║ 5      ║ 4       ║
║ 5214   ║ 3      ║ 1       ║
║ 5214   ║ 5      ║ 5       ║
║ 5214   ║ 2      ║ 15      ║
║ 5214   ║ 4      ║ 16      ║
╚════════╩════════╩═════════╝

A customer can rate a product from 1 to 5. The loyalty is the amount of products the customer purchased from us. Please note: there are a lot of gaps in this data as customers don't always respond. As you can see, customer 5214 only responded 4 times while he purchased more than 16 products. The research question is: Do customers become happier with our products as they buy more? So I looked at both panel data and time-series cross-sectional data analysis, but this seems like the wrong direction to go because it gives detailed information about each specific customer (see here: http://www.princeton.edu/~otorres/Panel101R.pdf ) What we want is an overview of what customers, generally speaking, think. So I am looking for the best way to achieve this. Preferably in R, but this is of course not necessary. A link to a theory would also be incredibly helpful! I hope I have provided enough background information on the case. If not, please do let me know!
{ "source": [ "https://stats.stackexchange.com/questions/185021", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/76744/" ] }
185,023
I hope I am making sense with the title. Often, the null hypothesis is formed with the intention of rejecting it. Is there a reason for this, or is it just a convention?
The purpose of statistical hypothesis testing is largely to impose self-skepticism, making us cautious about promulgating our hypothesis unless there is reasonable evidence to support it. Thus in the usual form of hypothesis testing the null hypothesis provides a "devil's advocate", arguing against us, and we only promulgate our hypothesis if we can show that the observations mean that it is unlikely that the advocate's argument is sound. So we take $H_0$ to be the thing we don't want to be true and then see if we are able to reject it. If we can reject it, it doesn't mean that our hypothesis is likely to be correct, just that it has passed this basic hurdle and so is worthy of consideration. If we can't, it doesn't mean that our hypothesis is false; it may be that we just don't have enough data to provide sufficient evidence. As @Bahgat rightly suggests (+1), this is very much Popper's idea of falsificationism. However, it is possible to have a test where $H_0$ is the thing you want to be true, but in order for that to work, you need to show that the test has sufficiently high statistical power to be confident of rejecting the null if it actually is false. Computing statistical power is rather more difficult than performing the test, which is why this form of testing is rarely used, and the alternative where $H_0$ is what you don't want to be true is normally used instead. So you don't have to take $H_0$ to oppose your hypothesis, but doing so makes the testing procedure much easier.
{ "source": [ "https://stats.stackexchange.com/questions/185023", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/34645/" ] }
185,216
Factor analysis has several rotation methods, such as varimax, quartimax, equamax, promax, oblimin, etc. I am unable to find any information that relates their names to what they actually do mathematically or statistically. Why is it called "equa-max" or "quarti-max"? In what way are the axes or matrices rotated so that they earn such names? Unfortunately, most of them were invented in the 1950s-1970s, so I cannot contact their authors.
This answer succeeds this general question on rotations in factor analysis (please read it) and briefly describes a number of specific methods. Rotations are performed iteratively and on every pair of factors (columns of the loading matrix). This is needed because the task of optimizing (maximizing or minimizing) the objective criterion simultaneously for all the factors would be mathematically difficult. However, in the end the final rotation matrix $\bf Q$ is assembled so that you can reproduce the rotation yourself with it, multiplying the extracted loadings $\bf A$ by it, $\bf AQ=S$ , getting the rotated factor structure matrix $\bf S$ . The objective criterion is some property of the elements (loadings) of the resultant matrix $\bf S$ . Quartimax orthogonal rotation seeks to maximize the sum of all loadings raised to power 4 in $\bf S$ . Hence its name ("quarti", four). It was shown that reaching this mathematical objective corresponds well enough to satisfying Thurstone's 3rd criterion of "simple structure", which reads: for every pair of factors there are several (ideally >= m) variables with loadings near zero for any one of the two and far from zero for the other factor . In other words, there will be many large and many small loadings; and points on the loading plot drawn for a pair of rotated factors would, ideally, lie close to one of the two axes. Quartimax thus minimizes the number of factors needed to explain a variable : it "simplifies" the rows of the loading matrix. But quartimax often produces the so-called "general factor" (which most of the time is not desirable in FA of variables; it is more desirable, I believe, in the so-called Q-mode FA of respondents). Varimax orthogonal rotation tries to maximize the variance of the squared loadings in each factor in $\bf S$ . Hence its name ("vari", variance). As a result, each factor has only a few variables with large loadings by the factor . Varimax directly "simplifies" the columns of the loading matrix and by that it greatly facilitates the interpretability of factors. On the loading plot, points are spread wide along a factor axis and tend to polarize themselves into near-zero and far-from-zero. This property seems to satisfy a mixture of Thurstone's simple structure points to an extent. Varimax, however, is not safe from producing points lying far away from the axes, i.e. "complex" variables loaded highly by more than one factor. Whether this is bad or ok depends on the field of study. Varimax performs well mostly in combination with the so-called Kaiser's normalization (equalizing communalities temporarily while rotating); it is advised to always use it with varimax (and recommended to use it with any other method, too). It is the most popular orthogonal rotation method, especially in psychometry and social sciences. Equamax (rarely, Equimax) orthogonal rotation can be seen as a method sharpening some properties of varimax. It was invented in attempts to further improve it. Equalization refers to a special weighting which Saunders (1962) introduced into a working formula of the algorithm. Equamax self-adjusts for the number of factors being rotated. It tends to distribute variables (highly loaded) more uniformly between factors than varimax does and is thus even less prone to giving "general" factors. On the other hand, equamax wasn't conceived to give up the quartimax's aim to simplify rows; equamax is rather a combination of varimax and quartimax than their in-between.
However, equamax is claimed to be considerably less "reliable" or "stable" than varimax or quartimax: for some data it can give disastrously bad solutions while for other data it gives perfectly interpretable factors with simple structure. One more method, similar to equamax and venturing even further in the quest for simple structure, is called parsimax ("maximizing parsimony") (see Mulaik, 2010, for discussion). I am sorry for stopping now and not reviewing the oblique methods - oblimin ("oblique" with "minimizing" a criterion) and promax (an unrestricted procrustes rotation performed after varimax). The oblique methods would probably require longer paragraphs to describe them, but I didn't plan any long answer today. Both methods are mentioned in Footnote 5 of this answer . I may refer you to Mulaik, Foundations of factor analysis (2010); Harman's classic book Modern factor analysis (1976); and whatever comes up on the internet when you search. See also The difference between varimax and oblimin rotations in factor analysis ; What does "varimax" mean in SPSS factor analysis? Later addendum, with the history and the formulae, for the meticulous. Quartimax. In the 1950s, several factor analysis experts tried to embody Thurstone's qualitative features of a "simple structure" (see footnote 1 here) into strict, quantitative criteria: Ferguson reasoned that the most parsimonious disposition of points (loadings) in the space of factors (axes) will be when, for most pairs of factors, each of the two axes pierces its own clot of points, thus maximizing its own coordinates and minimizing the coordinates onto the perpendicular axis. So he suggested minimizing the products of loadings for each variable in pairs of factors (i,j), summed across all variables: $\sum^p\sum_{i,j;i<j}(a_i a_j)^2$ ( $a$ is a loading, an element of the p variables x m factors loading matrix $\bf A$ ; in this case we mean the final loadings, after a rotation). Carroll also thought of pairs of factors (i,j) and wanted to minimize $\sum_{i,j;i<j}\sum^p(a_i^2 a_j^2)$ . The idea was that for each pair of factors, the loadings should mostly be of unequal size or both small - ideally a zero loading against a nonzero or zero one. Neuhaus and Wrigley wanted to maximize the variance of the squared values of loadings in the whole of $\bf A$ , in order for the loadings to split themselves into big ones and near-zero ones. Kaiser also chose variance, but variance of the squared loadings in rows of $\bf A$ ; and wanted to maximize the sum of these variances across the rows. Saunders offered to maximize the kurtosis in the doubled distribution of the loadings (i.e., every loading from $\bf A$ is taken twice - with positive and with negative sign, since the sign of a loading is basically arbitrary). High kurtosis in this symmetric-around-zero distribution implies maximization of the share (contribution) of extreme (big) loadings as well as of near-zero loadings, at the expense of the moderate-size loadings. It then turned out (and it can be shown mathematically) that, in the milieu of orthogonal rotation, the optimization of all these five criteria is in fact equivalent from the "argmax" point of view, and they all boil down to the maximization of $Q= \sum^p\sum^m a^4$ , the overall sum of the 4th powers of the loadings. The criterion was therefore called quartimax . To repeat what was said in the beginning of the answer, quartimax minimizes the number of factors needed to explain a variable: it "simplifies" the rows of the loading matrix.
But quartimax not infrequently produces the so-called "general factor". Varimax. Having observed that quartimax simplifies rows (variables) well but is prone to a "general factor", Kaiser suggested simplifying $\bf A$ 's columns (factors) instead. It was said above that Kaiser's idea for quartimax was to maximize the summed variance of squared loadings in rows of $\bf A$ . Now he transposed the proposal and suggested maximizing the summed variance of squared loadings in columns of $\bf A$ . That is, to maximize $\sum^m[\frac{1}{p} \sum^p (a^2)^2 - \frac{1}{p^2} (\sum^p a^2)^2]$ (the bracketed part is the formula of the variance of p squared values $a$ ), or, if multiplied by $p^2$ , for convenience: $V = \sum^m[p \sum^p (a^2)^2 - (\sum^p a^2)^2] = p \sum^m\sum^p (a^4) - \sum^m(\sum^p a^2)^2 = pQ - W$ where $V$ is the varimax criterion, $Q$ is the quartimax criterion, and $W$ is the sum of squared variances of the factors (after the rotation) [a factor's variance is the sum of its squared loadings]. [I've remarked that Kaiser obtained varimax by simply transposing the quartimax's problem - to simplify columns in place of rows - and you may switch the places of m and p in the formula for $V$ to get the symmetric corresponding expression, $mQ - W^*$ , for quartimax. Since we are rotating columns, not rows, of the loading matrix, the quartimax's term $W^*$ , the sum of squared communalities of the variables, does not change with rotation and therefore can be dropped from the objective statement; after which we can also drop the multiplier m - and stay with sole $Q$ , which is what quartimax is. In the case of varimax, by contrast, the term $W$ changes with rotations and thus stays an important part of the formula, to be optimized along with it.] Kaiser normalization . Kaiser felt dissatisfied with the fact that variables with large communalities dictate the rotation by the $V$ criterion much more than variables with small communalities. So he introduced normalizing all communalities to unit before launching the procedure maximizing $V$ (and, of course, de-normalizing back after the performed rotation - communalities don't change in an orthogonal rotation). Traditionally, Kaiser normalization is often recommended - mainly with varimax, but sometimes along with quartimax and other rotation methods too, because, logically, it is not tied to varimax solely. Whether the trick is really beneficial is an unsettled issue. Some software do it by default, some - by default only for varimax, still some - don't set it to be a default option. (In the end of this answer, I have a remark on the normalization.) Such is varimax, which maximizes the variances of squared loadings in columns of $\bf A$ and therefore simplifies the factors - in exact opposition to quartimax, which did that in rows of $\bf A$ , simplifying the variables. Kaiser demonstrated that, if the population factor structure is relatively sharp (i.e., variables tend to cluster together around different factors), varimax is more robust (stable) than quartimax to the removal of some variables from the rotation operation. Equamax and Parsimax. Saunders decided to play up the fact that quartimax and varimax are actually one formula, $pQ - cW$ , where $c=0$ (and then p traditionally is dropped) for quartimax and $c=1$ for varimax. He experimented with factor analytic data in the search of a greater value for coefficient $c$ in order to accentuate the varimaxian, non-quartimaxian side of the criterion.
He found that $c=m/2$ often produces factors that are more interpretable than after varimax or quartimax rotations. He called $pQ - \frac{m}{2}W$ equamax . The rationale for making $c$ dependent on m was that as the number of factors grows while p does not, the a priori expected proportion of variables to be loaded by any one factor diminishes; and to compensate for it, we should raise $c$ . In a similar pursuit of further "bettering" the generic criterion, Crawford arrived at yet another coefficient value, $c = p(m-1)/(p+m-2)$ , depending both on m and p . This version of the criterion was named parsimax . It is possible further to set $c=p$ , yielding the criterion facpars , "factor parsimony", which, as far as I'm aware, is very seldom used. (I think) It is still an open question whether equamax or parsimax are really better than varimax, and if yes, then in what situations. Their dependence on the parameters m (and p ) makes them self-tuning (for advocates) or capricious (for critics). Well, from a purely mathematical or general data point of view, raising $c$ simply means pushing factors in the direction of more equal final variances - and not at all making the criterion "more varimax than varimax" or "balanced between varimax and quartimax" w.r.t. their objective goals, for both varimax and quartimax optimize well, to the limit, what they were meant to optimize. The generic criterion of the form $pQ - cW$ considered here (where Q is quartimax, $\sum^p\sum^m a^4$ , and W is the sum of squared factor variances, $\sum^m(\sum^p a^2)^2$ ) is known as orthomax . Quartimax, varimax, equamax, parsimax, and facpars are its particular versions. In general, coefficient $c$ can take on any value. When close to +infinity, it produces factors of completely equal variances (so use that, if your aim is such). When close to -infinity, you get loadings equal to what you get if you rotate your loading matrix onto its principal components by means of PCA (without centering the columns). So, the value of $c$ is the parameter stretching the dimension "great general factor vs all factors of equal strength". In their important paper of 1970, Crawford & Ferguson extended the varying-$c$ criterion to the case of nonorthogonal factor rotations (calling that more general coefficient kappa). Literature: Harman, H.H. Modern factor analysis. 1976. Mulaik, S.A. Foundations of factor analysis. 2010. Clarkson, D.B. Quartic rotation criteria and algorithms // Psychometrika, 1988, 53, 2, pp. 251-259. Crawford, C.B., Ferguson, G.A. A general rotation criterion and its use in orthogonal rotation // Psychometrika, 1970, 35, 3, pp. 321-332. Comparing main characteristics of the criteria. I've been generating p variables x m factors loading matrices as values from a uniform distribution (so yes, that was not a sharp, clean factor structure), 50 matrices for each combination of p and the m/p proportion, and rotating each loading matrix by quartimax (Q), varimax (V), equamax (E), parsimax (P), and facpars (F), all methods accompanied by Kaiser normalization. Quartimax (Q0) and varimax (V0) were also tried without Kaiser normalization. Comparisons between the criteria on three characteristics of the rotated matrix are displayed below (for each matrix generated, the 7 values of the post-rotational characteristic were rescaled into the 0-1 range; then means across the 50 simulations and 95% CI are plotted). Fig.1.
Comparing the sum of variances of squared loadings in rows (maximizing this is the quartimax's prerogative): Comment: The superiority of quartimax over the other criteria tends to grow as p increases or as m/p increases. Varimax is most of the time second best. Equamax and parsimax are quite similar. Fig.2. Comparing the sum of variances of squared loadings in columns (maximizing this is the varimax's prerogative): Comment: The superiority of varimax over the other criteria tends to grow as p increases or as m/p increases. Quartimax's tendency is the opposite: as the parameters increase it loses ground. In the bottom-right part, quartimax is the worst, that is, with large-scale factor analysis it fails to mimic the "varimaxian" job. Equamax and parsimax are quite similar. Fig.3. Comparing inequality of factor variances (this is driven by coefficient $c$ ); the variance used as the measure of "inequality": Comment: Yes, with growing $c$ , that is, along the line Q V E P F, the inequality of factor variances falls. Q is the leader in inequality, which tells of its propensity for a "general factor", and its gap with the other criteria enlarges as p grows or m/p grows. Comparing inequality of factor variances (this is driven by coefficient $c$ ); the proportion "sum of absolute loadings of the strongest factor / average of such sums across the rest of the m-1 factors" was used as the measure of "inequality": This is another and more direct test for the presence of a "general factor". The configuration of results was almost the same as in the previous picture, Fig.3 , so I'm not showing a picture. Disclaimer. These trials, on which the above pictures are based, were done on loading matrices with random nonsharp factor structures, i.e. there were no specially preset clear clusters of variables or other specific structure among the loadings. Kaiser normalization . From the above Fig.1-2 one can learn that versions of quartimax and varimax without the normalization perform the two tasks (the maximizations) markedly better than when accompanied by the normalization. At the same time, absence of the normalization is a little bit more prone to a "general factor" ( Fig.3 ). The question whether Kaiser normalization should be used (and when) still seems open to me. Perhaps one should try both, with and without the normalization, and see where the applied factor interpretation was more satisfying. When we don't know what to choose based on mathematical grounds, it's time we resort to "philosophical" considerations, which are set in contrast, as usual. I could imagine two positions: Contra normalization. A variable with small communality (high uniqueness) is not of much help with any rotation. It contains only traces of the totality of the m factors, so it lacks a chance to get a large loading from any of them. But we interpret factors mostly by large loadings, and the smaller the loading the harder it is to sight the essence of the factor in the variable. It would even be justified to exclude a variable with small communality from the rotation. Kaiser normalization runs counter to such a motive. Pro normalization. Communality (non-uniqueness) of a variable is the amount of its inclination towards the space of the m factors from the outside (i.e., it is the magnitude of its projection into that space). Rotation of axes inside that space is not related to that inclination.
The rotation – solving the question which of the m factors will and which will not load the variable – concerns equally a variable will any size of communality, because the initial suspense of the said β€œinternal” decision is sharp to the same degree to all variables with their β€œexternal” inclination. So, as long as we are choosing to speak of the variables and not their projections inside, there’s no reason to spread them weights depending on their inclinations, in the act of rotation. And, to manage to discern the essence of a factor in the variable under any size of the loading – is a desideratum (and theoretically a must) for an interpreter of factors. Orthogonal analytic rotations (Orthomax) algorithm pseudocode Shorthand notation: * matrix multiplication (or simple multiplication, for scalars) &* elementwise (Hadamard) multiplication ^ exponentiation of elements sqrt(M) square roots of elements in matrix M rsum(M) row sums of elements in matrix M csum(M) column sums of elements in matrix M rssq(M) row sums of squares in matrix M, = rsum(M^2) cssq(M) column sums of squares in matrix M, = csum(M^2) msum(M) sum of elements in matrix M make(nr,nc,val) create nr x nc matrix populated with value val A is p x m loading matrix with m orthogonal factors, p variables If Kaiser normalization is requested: h = sqrt(rssq(A)). /*sqrt(communalities), column vector A = A/(h*make(1,m,1)). /*Bring all communalities to unit R is the orthogonal rotation matrix to accrue: Initialize it as m x m identity matrix Compute the initial value of the criterion Crit; the coefficient c is: 0 for Quartimax, 1 for Varimax, m/2 for Equamax, p(m-1)/(p+m-2) for Parsimax, p for Facpars; or you may choose arbitrary c: Q = msum(A^4) If β€œQuartimax” Crit = Q Else W = rssq(cssq(A)) Crit = p*Q – c*W Begin iterations For each pair of factors (columns of A) i, j (i<j) do: ai = A(:,i) /*Copy out the aj = A(:,j) /*two factors u = ai^2 – aj^2 v = 2 * ai &* aj @d = 2 * csum(u &* v) @c = csum(u^2 – v^2) @a = csum(u) @b = csum(v) Compute the angle Phi of rotation of the two factors in their space (coefficient c as defined above): num = @d – c * 2*@a*@b/p den = @c – c * (@a^2 - @b^2)/p Phi4 = artan(num/den) /*4Phi (in radians) If den>0 /*4Phi is in the 1st or the 4th quadrant Phi = Phi4/4 Else if num>0 /*4Phi is in the 2nd quadrant (pi is the pi value) Phi = (pi + Phi4)/4 Else /*4Phi is in the 3rd quadrant Phi = (Phi4 – pi)/4 Perform the rotation of the pair (rotate if Phi is not negligible): @sin = sin(Phi) @cos = cos(Phi) r_ij = {@cos,-@sin;@sin,@cos} /*The 2 x 2 rotation matrix A(:,{i,j}) = {ai,aj} * r_ij /*Rotate factors (columns) i and j in A R(:,{i,j}) = R(:,{i,j}) * r_ij /*Update also the columns of the being accrued R Go to consider next pair of factors i, j, again copying them out, etc. When all pairs are through, compute the criterion: Crit = … (see as defined above) End iterations if Crit has stopped growing any much (say, increase not greater than 0.0001 versus the previous iteration), or the stock of iterations (say, 50) exhausted. If Kaiser normalization was requested: A = A &* (h*make(1,m,1)) /*De-normalize Ready. A has been rotated. A(input)*R = A(output) Optional post-actions, for convenience: 1) Reorder factors by decreasing their variances (i.e., their cssq(A)). 2) Switch sign of the loadings so that positive loadings prevail in each factor. Quartimax and Varimax are always positive values; others can be negative. All the criteria grow on iterations.
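For readers who want to run the pseudocode directly, here is a rough R transcription I put together from the steps above (my own sketch, not checked against any reference implementation; base R's stats::varimax already covers the c = 1 special case if you only need varimax). The argument cf is the orthomax coefficient c, and at least two factors are assumed.

orthomax <- function(A, cf = 1, kaiser = TRUE, eps = 1e-4, max.iter = 50) {
  p <- nrow(A); m <- ncol(A)
  if (kaiser) {
    h <- sqrt(rowSums(A^2))          # sqrt(communalities)
    A <- A / h                       # bring all communalities to unit
  }
  R <- diag(m)                       # rotation matrix to accrue
  crit <- function(A) p * sum(A^4) - cf * sum(colSums(A^2)^2)
  old <- crit(A)
  for (iter in 1:max.iter) {
    for (i in 1:(m - 1)) for (j in (i + 1):m) {
      ai <- A[, i]; aj <- A[, j]
      u <- ai^2 - aj^2; v <- 2 * ai * aj
      d <- 2 * sum(u * v); cc <- sum(u^2 - v^2)
      a <- sum(u); b <- sum(v)
      num <- d  - cf * 2 * a * b / p
      den <- cc - cf * (a^2 - b^2) / p
      phi4 <- atan(num / den)        # 4*Phi, quadrant correction below
      if (den > 0) {
        phi <- phi4 / 4
      } else if (num > 0) {
        phi <- (pi + phi4) / 4
      } else {
        phi <- (phi4 - pi) / 4
      }
      r <- matrix(c(cos(phi), sin(phi), -sin(phi), cos(phi)), 2, 2)
      A[, c(i, j)] <- cbind(ai, aj) %*% r   # rotate the pair of factors
      R[, c(i, j)] <- R[, c(i, j)] %*% r    # update the accrued rotation
    }
    new <- crit(A)
    if (new - old < eps) break       # criterion has stopped growing
    old <- new
  }
  if (kaiser) A <- A * h             # de-normalize
  list(loadings = A, rotmat = R)
}

For quartimax set cf = 0, for equamax cf = m/2, for parsimax cf = p*(m-1)/(p+m-2), for facpars cf = p.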
{ "source": [ "https://stats.stackexchange.com/questions/185216", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/212256/" ] }
185,378
Suppose you have a single cross-section of data where individuals are located within groups (e.g. students within schools) and you wish to estimate a model of the form Y_i = a + B*X_i where X is a vector of individual level characteristics and a a constant. In this case, suppose unobserved between-group heterogeneity biases your point estimates of B and their SEs since it is correlated with your independent variable of interest. One option is to cluster your SEs by groups (schools). Another is to include group FEs. Another is to use both. What should one consider when choosing between these options? It is particularly unclear why one might cluster SEs by group AND use group FE. In my specific case, I have 35 groups and 5,000 individuals nested within each group. I have followed the discussion in this pdf , but it is not very clear on why and when one might use both clustered SEs and fixed effects. (Please discuss the pros and cons of clustered SEs vs. FEs instead of suggesting I just fit a multilevel model.)
Both approaches, using group fixed effects and/or cluster-adjusted standard errors, take into account different issues related to clustered (or panel) data, and I would clearly view them as distinct approaches. Often you want to use both of them: First of all, cluster-adjusted standard errors account for within-cluster correlation or heteroscedasticity, which the fixed-effects estimator does not take into account unless you are willing to make further assumptions; see the Imbens and Wooldridge lecture slides for a good discussion of short and long panels and various issues related to this problem. There is also a recent paper about this topic by Cameron and Miller: A Practitioner's Guide to Cluster-Robust Inference, which might be interesting for you. If you do not want to model the variance-covariance matrix and you suspect that within-cluster correlation is present, I advise using cluster-robust standard errors, because the bias in your SEs may be severe (much more problematic than for heteroscedasticity; see Angrist & Pischke Chapter III.8 for a discussion of this topic). But you need enough clusters (Angrist and Pischke say 40-50 as a rule of thumb). Cluster adjustment affects only the standard errors and leaves your point estimates unchanged (standard errors will usually go up)! Fixed-effects estimation takes into account unobserved time-invariant heterogeneity (as you mentioned). This can be good or bad: On the one hand, you need fewer assumptions to get consistent estimates. On the other hand, you throw away a lot of variance which might be useful. Some people, like Andrew Gelman, prefer hierarchical modeling to fixed effects, but here opinions differ. Fixed-effects estimation will change both point and interval estimates (here too, standard errors will usually be higher). So to sum up: Cluster-robust standard errors are an easy way to account for possible issues related to clustered data if you do not want to bother with modeling inter- and intra-cluster correlation (and there are enough clusters available). Fixed-effects estimation uses only certain variation, so it depends on your model whether you want to make estimates based on less variation or not. But without further assumptions, fixed-effects estimation will not take care of the problems related to intra-cluster correlation in the variance matrix. Neither will cluster-robust standard errors take into account problems related to the use of fixed-effects estimation.
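To make the two options concrete in R, here is a hypothetical sketch (dat, y, x1, x2 and school are made-up names): the group fixed effects are absorbed with dummies, and the standard errors are clustered at the group level with the sandwich and lmtest packages.

library(sandwich)
library(lmtest)

fit <- lm(y ~ x1 + x2 + factor(school), data = dat)   # school fixed effects

# same point estimates, but cluster-robust standard errors (clustered by school)
coeftest(fit, vcov = vcovCL(fit, cluster = dat$school))

With only 35 clusters, as in the question, you are near the lower end of the 40-50 rule of thumb mentioned above, so a small-sample cluster correction or a wild cluster bootstrap may be worth considering.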
{ "source": [ "https://stats.stackexchange.com/questions/185378", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/97290/" ] }
185,491
I am currently struggling with finding the right model for difficult count data (dependent variable). I have tried various different models (mixed effects models are necessary for my kind of data) such as lmer and lme4 (with a log transform) as well as generalized linear mixed effects models with various families such as Gaussian or negative binomial. However, I am quite unsure on how to correctly diagnose the resulting fits. I found a lot of different opinions on that topic on the Web. I think diagnostics on linear (mixed) regression are quite straight-forward. You can go ahead and analyse the residuals (normality) as well as study heteroscedasticity by plotting fitted values compared to residuals. However, how do you properly do that for the generalized version? Let us focus on a negative binomial (mixed) regression for now. I have seen quite opposing statements regarding the residuals here: In Checking residuals for normality in generalised linear models it is pointed out in the first answer that the plain residuals are not normally distributed for a GLM; I think this is clear. However, then it is pointed out that Pearson and deviance residuals are also not supposed to be normal. Yet, the second answer states that deviance residuals should be normally distributed (combined with a reference). That deviance residuals should be normally distributed is hinted at in the documentation for ?glm.diag.plots (from R's boot package), though. In this blog post , the author first studied normality of what I assume are Pearson residuals for a NB mixed-effects regression model. As expected (in my honest opinion) the residuals did not show to be normal and the author assumed this model to be a bad fit. However, as stated in the comments, the residuals should be distributed according to a negative binomial distribution. In my opinion, this comes closest to the truth as GLM residuals can have other distributions than the normal one. Is this correct? How to check for things like heteroscedasticity here? The last point (plotting residuals against quantiles of the estimated distribution) is emphasized in Ben & Yohai (2004) . Currently, this seems the way to go for me. In a nutshell: How do you properly study the model fits of generalized linear (mixed) regression models specifically with a focus on residuals?
This answer is not based on my knowledge but rather quotes what Bolker et al. (2009) wrote in an influential paper in the journal Trends in Ecology and Evolution . Since the article is not open access (although searching for it on Google scholar may prove successful, I thought I cite important passages that may be helpful to address parts of the questions. So again, it's not what I came up with myself but I think it represents the best condensed information on GLMMs (inlcuding diagnostics) out there in a very straight forward and easy to understand style of writing. If by any means this answer is not suitable for whatever reason, I will simply delete it. Things that I find useful with respect to questions regarding diagnostics are highlighted in bold . Page 127: Researchers faced with nonnormal data often try shortcuts such as transforming data to achieve normality and homogeneity of variance, using nonparametric tests or relying on the robustness of classical ANOVA to nonnormality for balanced designs [15]. They might ignore random effects altogether (thus committing pseudoreplication) or treat them as fixed factors [16]. However, such shortcuts can fail (e.g. count data with many zero values cannot be made normal by transformation). Even when they succeed, they might violate statistical assumptions (even nonparametric tests make assumptions, e.g. of homogeneity of variance across groups) or limit the scope of inference (one cannot extrapolate estimates of fixed effects to new groups). Instead of shoehorning their data into classical statistical frameworks, researchers should use statistical approaches that match their data. Generalized linear mixed models (GLMMs) combine the properties of two statistical frameworks that are widely used in ecology and evolution, linear mixed models (which incorporate random effects) and generalized linear models (which handle nonnormal data by using link functions and exponential family [e.g. normal, Poisson or binomial] distributions). GLMMs are the best tool for analyzing nonnormal data that involve random effects: all one has to do, in principle, is specify a distribution, link function and structure of the random effects. Page 129, Box 1: The residuals indicated overdispersion , so we refitted the data with a quasi-Poisson model. Despite the large estimated scale parameter (10.8), exploratory graphs found no evidence of outliers at the level of individuals, genotypes or populations. We used quasi-AIC (QAIC), using one degree of freedom for random effects [49], for randomeffect and then for fixed-effect model selection. Page 133, Box 4: Here we outline a general framework for constructing a full (most complex) model, the first step in GLMM analysis. Following this process, one can then evaluate parameters and compare submodels as described in the main text and in Figure 1. Specify fixed (treatments or covariates) and random effects (experimental, spatial or temporal blocks, individuals, etc.). Include only important interactions. Restrict the model a priori to a feasible level of complexity, based on rules of thumb (>5–6 random-effect levels per random effect and >10–20 samples per treatment level or experimental unit) and knowledge of adequate sample sizes gained from previous studies [64,65]. Choose an error distribution and link function (e.g. Poisson distribution and log link for count data, binomial distribution and logit link for proportion data). 
Graphical checking : are variances of data (transformed by the link function) homogeneous across categories? Are responses of transformed data linear with respect to continuous predictors? Are there outlier individuals or groups? Do distributions within groups match the assumed distribution? Fit fixed-effect GLMs both to the full (pooled) data set and within each level of the random factors [28,50]. Estimated parameters should be approximately normally distributed across groups (group-level parameters can have large uncertainties, especially for groups with small sample sizes). Adjust model as necessary (e.g. change link function or add covariates). Fit the full GLMM. Insufficient computer memory o r too slow: reduce model complexity. If estimation succeeds on a subset of the data, try a more efficient estimation algorithm (e.g. PQL if appropriate). Failure to converge (warnings or errors): reduce model complexity or change optimization settings (make sure the resulting answers make sense). Try other estimation algorithms. Zero variance components or singularity (warnings or errors): check that the model is properly defined and identifiable (i.e. all components can theoretically be estimated). Reduce model complexity. Adding information to the model (additional covariates, or new groupings for random effects) can alleviate problems, as will centering continuous covariates by subtracting their mean [50]. If necessary, eliminate random effects from the full model, dropping (i) terms of less intrinsic biological interest, (ii) terms with very small estimated variances and/or large uncertainty, or (iii) interaction terms. (Convergence errors or zero variances could indicate insufficient data.) Recheck assumptions for the final model (as in step 3) and check that parameter estimates and confidence intervals are reasonable (gigantic confidence intervals could indicate fitting problems). The magnitude of the standardized residuals should be independent of the fitted values. Assess overdispersion (the sum of the squared Pearson residuals should be $\chi^2$ distributed [66,67]). If necessary, change distributions or estimate a scale parameter. Check that a full model that includes dropped random effects with small standard deviations gives similar results to the final model. If different models lead to substantially different parameter estimates, consider model averaging. Residuals plots should be used to assess overdispersion and transformed variances should be homogeneous across categories. Nowhere in the article was mentioned that residuals are supposed to be normally distributed. I think the reason why there are contrasting statements reflects that GLMMs (page 127-128)... ...are surprisingly challenging to use even for statisticians. Although several software packages can handle GLMMs (Table 1), few ecologists and evolutionary biologists are aware of the range of options or of the possible pitfalls. In reviewing papers in ecology and evolution since 2005 found by Google Scholar, 311 out of 537 GLMM analyses (58%) used these tools inappropriately in some way (see online supplementary material). And here are a few full worked examples using GLMMs including diagnostics. I realize that this answer is more like a comment and should be treated as such. But the comment section doesn't allow me to add such a long comment. Also since I believe this paper is of value for this discussion (but unfortunately behind a pay-wall), I thought it would be useful to quote important passages here. 
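As a concrete illustration of the overdispersion check quoted above (the sum of squared Pearson residuals compared against a $\chi^2$ distribution), a small helper along these lines can be applied to a fitted glm or glmer object called fit; this is a sketch only, using the residual degrees of freedom as reported by df.residual().

overdisp <- function(fit) {
  rdf   <- df.residual(fit)                      # residual degrees of freedom
  prs   <- residuals(fit, type = "pearson")      # Pearson residuals
  chisq <- sum(prs^2)
  c(chisq = chisq, ratio = chisq / rdf,
    p.value = pchisq(chisq, df = rdf, lower.tail = FALSE))
}
# a ratio well above 1 (with a small p-value) points to overdispersion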
Cited papers: [15] - G.P. Quinn, M.J. Keough (2002): Experimental Design and Data Analysis for Biologists, Cambridge University Press. [16] - M.J. Crawley (2002): Statistical Computing: An Introduction to Data Analysis Using S-PLUS, John Wiley & Sons. [28] - J.C. Pinheiro, D.M. Bates (2000): Mixed-Effects Models in S and S-PLUS, Springer. [49] - F. Vaida, S. Blanchard (2005): Conditional Akaike information for mixed-effects models. Biometrika, 92, pp. 351–370. [50] - A. Gelman, J. Hill (2006): Data Analysis Using Regression and Multilevel/Hierarchical Models, Cambridge University Press. [64] - N.J. Gotelli, A.M. Ellison (2004): A Primer of Ecological Statistics, Sinauer Associates. [65] - F.J. Harrell (2001): Regression Modeling Strategies, Springer. [66] - J.K. Lindsey (1997): Applying Generalized Linear Models, Springer. [67] - W. Venables, B.D. Ripley (2002): Modern Applied Statistics with S, Springer.
{ "source": [ "https://stats.stackexchange.com/questions/185491", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/19657/" ] }
185,507
Suppose we have data set $(X_i,Y_i)$ with $n$ points. We want to perform a linear regression, but first we sort the $X_i$ values and the $Y_i$ values independently of each other, forming data set $(X_i,Y_j)$ . Is there any meaningful interpretation of the regression on the new data set? Does this have a name? I imagine this is a silly question so I apologize, I'm not formally trained in statistics. In my mind this completely destroys our data and the regression is meaningless. But my manager says he gets "better regressions most of the time" when he does this (here "better" means more predictive). I have a feeling he is deceiving himself. EDIT: Thank you for all of your nice and patient examples. I showed him the examples by @RUser4512 and @gung and he remains staunch. He's becoming irritated and I'm becoming exhausted. I feel crestfallen. I will probably begin looking for other jobs soon.
I'm not sure what your boss thinks "more predictive" means. Many people incorrectly believe that lower $p$-values mean a better / more predictive model. That is not necessarily true (this being a case in point). However, independently sorting both variables beforehand will guarantee a lower $p$-value. On the other hand, we can assess the predictive accuracy of a model by comparing its predictions to new data that were generated by the same process. I do that below in a simple example (coded with R ). options(digits=3) # for cleaner output set.seed(9149) # this makes the example exactly reproducible B1 = .3 N = 50 # 50 data x = rnorm(N, mean=0, sd=1) # standard normal X y = 0 + B1*x + rnorm(N, mean=0, sd=1) # cor(x, y) = .31 sx = sort(x) # sorted independently sy = sort(y) cor(x,y) # [1] 0.309 cor(sx,sy) # [1] 0.993 model.u = lm(y~x) model.s = lm(sy~sx) summary(model.u)$coefficients # Estimate Std. Error t value Pr(>|t|) # (Intercept) 0.021 0.139 0.151 0.881 # x 0.340 0.151 2.251 0.029 # significant summary(model.s)$coefficients # Estimate Std. Error t value Pr(>|t|) # (Intercept) 0.162 0.0168 9.68 7.37e-13 # sx 1.094 0.0183 59.86 9.31e-47 # wildly significant u.error = vector(length=N) # these will hold the output s.error = vector(length=N) for(i in 1:N){ new.x = rnorm(1, mean=0, sd=1) # data generated in exactly the same way new.y = 0 + B1*x + rnorm(N, mean=0, sd=1) pred.u = predict(model.u, newdata=data.frame(x=new.x)) pred.s = predict(model.s, newdata=data.frame(x=new.x)) u.error[i] = abs(pred.u-new.y) # these are the absolute values of s.error[i] = abs(pred.s-new.y) # the predictive errors }; rm(i, new.x, new.y, pred.u, pred.s) u.s = u.error-s.error # negative values means the original # yielded more accurate predictions mean(u.error) # [1] 1.1 mean(s.error) # [1] 1.98 mean(u.s<0) # [1] 0.68 windows() layout(matrix(1:4, nrow=2, byrow=TRUE)) plot(x, y, main="Original data") abline(model.u, col="blue") plot(sx, sy, main="Sorted data") abline(model.s, col="red") h.u = hist(u.error, breaks=10, plot=FALSE) h.s = hist(s.error, breaks=9, plot=FALSE) plot(h.u, xlim=c(0,5), ylim=c(0,11), main="Histogram of prediction errors", xlab="Magnitude of prediction error", col=rgb(0,0,1,1/2)) plot(h.s, col=rgb(1,0,0,1/4), add=TRUE) legend("topright", legend=c("original","sorted"), pch=15, col=c(rgb(0,0,1,1/2),rgb(1,0,0,1/4))) dotchart(u.s, color=ifelse(u.s<0, "blue", "red"), lcolor="white", main="Difference between predictive errors") abline(v=0, col="gray") legend("topright", legend=c("u better", "s better"), pch=1, col=c("blue","red")) The upper left plot shows the original data. There is some relationship between $x$ and $y$ (viz., the correlation is about $.31$.) The upper right plot shows what the data look like after independently sorting both variables. You can easily see that the strength of the correlation has increased substantially (it is now about $.99$). However, in the lower plots, we see that the distribution of predictive errors is much closer to $0$ for the model trained on the original (unsorted) data. The mean absolute predictive error for the model that used the original data is $1.1$, whereas the mean absolute predictive error for the model trained on the sorted data is $1.98$β€”nearly twice as large. That means the sorted data model's predictions are much further from the correct values. The plot in the lower right quadrant is a dot plot. It displays the differences between the predictive error with the original data and with the sorted data. 
This lets you compare the two corresponding predictions for each new observation simulated. Blue dots to the left are times when the original data were closer to the new $y$-value, and red dots to the right are times when the sorted data yielded better predictions. There were more accurate predictions from the model trained on the original data $68\%$ of the time. The degree to which sorting will cause these problems is a function of the linear relationship that exists in your data. If the correlation between $x$ and $y$ were $1.0$ already, sorting would have no effect and thus not be detrimental. On the other hand, if the correlation were $-1.0$, the sorting would completely reverse the relationship, making the model as inaccurate as possible. If the data were completely uncorrelated originally, the sorting would have an intermediate, but still quite large, deleterious effect on the resulting model's predictive accuracy. Since you mention that your data are typically correlated, I suspect that has provided some protection against the harms intrinsic to this procedure. Nonetheless, sorting first is definitely harmful. To explore these possibilities, we can simply re-run the above code with different values for B1 (using the same seed for reproducibility) and examine the output: B1 = -5 : cor(x,y) # [1] -0.978 summary(model.u)$coefficients[2,4] # [1] 1.6e-34 # (i.e., the p-value) summary(model.s)$coefficients[2,4] # [1] 1.82e-42 mean(u.error) # [1] 7.27 mean(s.error) # [1] 15.4 mean(u.s<0) # [1] 0.98 B1 = 0 : cor(x,y) # [1] 0.0385 summary(model.u)$coefficients[2,4] # [1] 0.791 summary(model.s)$coefficients[2,4] # [1] 4.42e-36 mean(u.error) # [1] 0.908 mean(s.error) # [1] 2.12 mean(u.s<0) # [1] 0.82 B1 = 5 : cor(x,y) # [1] 0.979 summary(model.u)$coefficients[2,4] # [1] 7.62e-35 summary(model.s)$coefficients[2,4] # [1] 3e-49 mean(u.error) # [1] 7.55 mean(s.error) # [1] 6.33 mean(u.s<0) # [1] 0.44
{ "source": [ "https://stats.stackexchange.com/questions/185507", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/97372/" ] }
185,639
LSTM was invented specifically to avoid the vanishing gradient problem. It is supposed to do that with the Constant Error Carousel (CEC), which on the diagram below (from Greff et al.) corresponds to the loop around the cell. (Figure omitted; source: deeplearning4j.org) I understand that this part can be seen as a sort of identity function, so the derivative is one and the gradient stays constant. What I don't understand is how the gradient does not vanish because of the other activation functions. The input, output and forget gates use a sigmoid, whose derivative is at most 0.25, and g and h are traditionally tanh. How does backpropagating through those not make the gradient vanish?
The vanishing gradient is best explained in the one-dimensional case. The multi-dimensional case is more complicated but essentially analogous. You can review it in this excellent paper [1]. Assume we have a hidden state $h_t$ at time step $t$. If we make things simple and remove biases and inputs, we have $$h_t = \sigma(w h_{t-1}).$$ Then you can show that \begin{align} \frac{\partial h_{t'}}{\partial h_t} &= \prod_{k=1}^{t' - t} w \sigma'(w h_{t'-k})\\ &= \underbrace{w^{t' - t}}_{!!!}\prod_{k=1}^{t' - t} \sigma'(w h_{t'-k}) \end{align} The factor marked with !!! is the crucial one. If the weight is not equal to 1, it will either decay to zero exponentially fast in $t'-t$, or grow exponentially fast. In LSTMs, you have the cell state $s_t$. The derivative there is of the form $$\frac{\partial s_{t'}}{\partial s_t} = \prod_{k=1}^{t' - t} \sigma(v_{t+k}).$$ Here $v_t$ is the input to the forget gate. As you can see, there is no exponentially fast decaying factor involved. Consequently, there is at least one path where the gradient does not vanish. For the complete derivation, see [2]. [1] Pascanu, Razvan, Tomas Mikolov, and Yoshua Bengio. "On the difficulty of training recurrent neural networks." ICML (3) 28 (2013): 1310-1318. [2] Bayer, Justin Simon. Learning Sequence Representations. Diss., Technische Universität München, 2015.
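A toy numerical illustration of the two products above, in R (all numbers are arbitrary; the point is only the order of magnitude of each product over many time steps):

sigma  <- function(x) 1 / (1 + exp(-x))
dsigma <- function(x) sigma(x) * (1 - sigma(x))   # at most 0.25

set.seed(1)
steps <- 100                  # number of time steps between t and t'
w <- 0.9                      # recurrent weight of the plain RNN
h <- rnorm(steps)             # arbitrary pre-activations

prod(w * dsigma(w * h))       # plain RNN path: essentially zero (vanishes)

v <- rnorm(steps, mean = 4)   # forget-gate inputs, so the gate is mostly near 1
prod(sigma(v))                # LSTM cell-state path: stays of order 0.1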
{ "source": [ "https://stats.stackexchange.com/questions/185639", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/73507/" ] }
185,652
How can I, using the correlograms above, specify the orders of the ARIMA model? These are the pac an ac of the differenced time series. Using AIC and BIC, I can't seem te find a proper model. +-----------+ | rGDPs | |-----------| 1. | . | 2. | .0105743 | 3. | .0077057 | 4. | .0120554 | 5. | .0089645 | |-----------| 6. | .0066547 | 7. | .0106363 | 8. | .0049944 | 9. | .0050993 | 10. | .00879 | |-----------| 11. | .0043526 | 12. | .0086107 | 13. | .008379 | 14. | .0076342 | 15. | .0057287 | |-----------| 16. | .0104761 | 17. | .0083132 | 18. | .0046806 | 19. | .0131731 | 20. | .0041189 | |-----------| 21. | .0096865 | 22. | .010355 | 23. | .0078735 | 24. | .0118189 | 25. | .0094376 | |-----------| 26. | .0110931 | 27. | .0088711 | 28. | .0094872 | 29. | .0087013 | 30. | .0075502 | |-----------| 31. | .0077829 | 32. | .0065832 | 33. | .0050039 | 34. | -.0002508 | 35. | -.007906 | |-----------| 36. | -.0107899 | 37. | -.0171785 | 38. | -.0105772 | 39. | -.0032196 | 40. | -.0009422 | |-----------| 41. | .0010233 | 42. | .0019455 | 43. | -.0002184 | 44. | .0023556 | 45. | .0018158 | |-----------| 46. | -.0011978 | 47. | -.0032644 | 48. | -.0037203 | 49. | -.0042601 | 50. | -.0049648 | |-----------| 51. | -.0038538 | 52. | -.0077257 | 53. | -.0031233 | 54. | -.0013533 | 55. | .0007629 | |-----------| 56. | .0017443 | +-----------+ These are the observations on rGDPs, quarterly data. rGDPs is already the growth rate of gdp: calculated using logs: rGDPs = log(GDPs) - log(GDPs[n-1]) Edit: Here is my raw data, where the indicators n and s represent two countries. In this question, all I posted was about 's'. http://pastebin.com/4LbkgmEr
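For reference, the AIC/BIC search over candidate orders can be automated in R with the forecast package; this is a generic sketch only, where rGDPs stands for the quarterly growth-rate series posted above.

library(forecast)
y <- ts(rGDPs, frequency = 4)          # quarterly data
Acf(y); Pacf(y)                        # the correlograms discussed in the question
fit <- auto.arima(y, ic = "aicc", stepwise = FALSE)   # grid search over (p,d,q) by AICc
summary(fit)
checkresiduals(fit)                    # residual ACF plus Ljung-Box test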
{ "source": [ "https://stats.stackexchange.com/questions/185652", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/97175/" ] }
185,683
Suppose $X$ and $Y$ are independent and each uniformly distributed on $[0, 1]$. What is the PDF of $Z = Y / X$? The answer from some probability theory textbook is $$ f_Z(z) = \begin{cases} 1/2, & \text{if } 0 \le z \le 1 \\ 1/(2z^2), & \text{if } z > 1 \\ 0, & \text{otherwise}. \end{cases} $$ I am wondering: by symmetry, shouldn't $f_Z(1/2) = f_Z(2)$? This is not the case according to the PDF above.
The right logic is that with independent $X, Y \sim U(0,1)$, $Z=\frac YX$ and $Z^{-1} =\frac XY$ have the same distribution and so for $0 < z < 1$ \begin{align} P\left\{\frac YX \leq z\right\} &= P\left\{\frac XY \leq z\right\}\\ &= P\left\{\frac YX \geq \frac 1z \right\}\\ F_{Z}(z) &= 1 - F_{Z}\left(\frac 1z\right) \end{align} where the equation with CDFs uses the fact that $\frac YX$ is a continuous random variable and so $P\{Z \geq a\} = P\{Z > a\} = 1-F_Z(a)$. Hence the pdf of $Z$ satisfies $$f_Z(z) = z^{-2}f_Z(z^{-1}), \quad 0 < z < 1.$$ Thus $f_Z(\frac 12) = 4f_Z(2)$, and not $f_Z(\frac 12) = f_Z(2)$ as you thought it should be.
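A quick Monte Carlo check of this (and of the textbook density in the question):

set.seed(1)
z   <- runif(1e6) / runif(1e6)            # Z = Y/X with X, Y ~ U(0,1), independent
eps <- 0.01
dens.at <- function(a) mean(abs(z - a) < eps) / (2 * eps)   # crude density estimate
c(at.half = dens.at(0.5), at.two = dens.at(2))
# theory: f(1/2) = 1/2 and f(2) = 1/(2*2^2) = 1/8, so the ratio is 4, not 1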
{ "source": [ "https://stats.stackexchange.com/questions/185683", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/7787/" ] }
186,097
I've been following Kaggle competitions for a long time and I come to realize that many winning strategies involve using at least one of the "big threes": bagging, boosting and stacking. For regressions, rather than focusing on building one best possible regression model, building multiple regression models such as (Generalized) linear regression, random forest, KNN, NN, and SVM regression models and blending the results into one in a reasonable way seems to out-perform each individual method a lot of times. Of course, a solid understanding of each method is the key and an intuitive story can be told based on a linear regression model, but I'm wondering if this has become the state of art methodology in order to achieve the best possible results.
It is well-known, at least from the late 1960', that if you take several forecasts † and average them, then the resulting aggregate forecast in many cases will outperform the individual forecasts. Bagging, boosting and stacking are all based exactly on this idea. So yes, if your aim is purely prediction then in most cases this is the best you can do. What is problematic about this method is that it is a black-box approach that returns the result but does not help you to understand and interpret it. Obviously, it is also more computationally intensive than any other method since you have to compute few forecasts instead of single one. † This concerns about any predictions in general, but it is often described in forecasting literature. Winkler, RL. and Makridakis, S. (1983). The Combination of Forecasts. J. R. Statis. Soc. A. 146 (2), 150-157. Makridakis, S. and Winkler, R.L. (1983). Averages of Forecasts: Some Empirical Results. Management Science, 29 (9) 987-996. Clemen, R.T. (1989). Combining forecasts: A review and annotated bibliography. International Journal of Forecasting, 5, 559-583. Bates, J.M. and Granger, C.W. (1969). The combination of forecasts. Or, 451-468. Makridakis, S. and Hibon, M. (2000). The M3-Competition: results, conclusions and implications. International journal of forecasting, 16 (4), 451-476. Reid, D.J. (1968). Combining three estimates of gross domestic product. Economica, 431-444. Makridakis, S., Spiliotis, E., and Assimakopoulos, V. (2018). The M4 Competition: Results, findings, conclusion and way forward. International Journal of Forecasting.
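A toy illustration of the idea in R (all data and models here are made up): three very different regressors are fit on the same training set and their test-set predictions are simply averaged; run it and compare the test errors, since the averaged forecast is typically close to, and often better than, the best single model.

set.seed(42)
n  <- 300
x  <- runif(n, -2, 2)
y  <- sin(2 * x) + x + rnorm(n, sd = 0.4)
tr <- 1:200; te <- 201:n
d  <- data.frame(x, y)

f1 <- predict(lm(y ~ x, d[tr, ]), d[te, ])              # straight line
f2 <- predict(lm(y ~ poly(x, 5), d[tr, ]), d[te, ])     # degree-5 polynomial
f3 <- sapply(x[te], function(x0)                        # crude 10-nearest-neighbour
        mean(y[tr][order(abs(x[tr] - x0))[1:10]]))

mse <- function(f) mean((y[te] - f)^2)                  # compare test MSEs
round(c(linear = mse(f1), poly = mse(f2), knn = mse(f3),
        average = mse((f1 + f2 + f3) / 3)), 3)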
{ "source": [ "https://stats.stackexchange.com/questions/186097", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/97798/" ] }
186,463
I have two probability density functions of normal distributions: $$f_1(x_1 \; | \; \mu_1, \sigma_1) = \frac{1}{\sigma_1\sqrt{2\pi} } \; e^{ -\frac{(x-\mu_1)^2}{2\sigma_1^2} }$$ and $$f_2(x_2 \; | \; \mu_2, \sigma_2) = \frac{1}{\sigma_2\sqrt{2\pi} } \; e^{ -\frac{(x-\mu_2)^2}{2\sigma_2^2} }$$ I'm looking for the probability density function of the separation between $x_1$ and $x_2$. I think that means I'm looking for the probability density function of $|x_1 - x_2|$. Is that correct? How do I find that?
This question can be answered as stated only by assuming the two random variables $X_1$ and $X_2$ governed by these distributions are independent. This makes their difference $X = X_2-X_1$ Normal with mean $\mu = \mu_2-\mu_1$ and variance $\sigma^2=\sigma_1^2 + \sigma_2^2$. (The following solution can easily be generalized to any bivariate Normal distribution of $(X_1, X_2)$.) Thus the variable $$Z = \frac{X-\mu}{\sigma} = \frac{X_2 - X_1 - (\mu_2 - \mu_1)}{\sqrt{\sigma_1^2 + \sigma_2^2}}$$ has a standard Normal distribution (that is, with zero mean and unit variance) and $$X = \sigma \left(Z + \frac{\mu}{\sigma}\right).$$ The expression $$|X_2 - X_1| = |X| = \sqrt{X^2} = \sigma\sqrt{\left(Z + \frac{\mu}{\sigma}\right)^2}$$ exhibits the absolute difference as a scaled version of the square root of a Non-central chi-squared distribution with one degree of freedom and noncentrality parameter $\lambda=(\mu/\sigma)^2$. A Non-central chi-squared distribution with these parameters has probability element $$f(y)dy = \frac{\sqrt{y}}{\sqrt{2 \pi } } e^{\frac{1}{2} (-\lambda -y)} \cosh \left(\sqrt{\lambda y} \right) \frac{dy}{y},\ y \gt 0.$$ Writing $y=x^2$ for $x \gt 0$ establishes a one-to-one correspondence between $y$ and its square root, resulting in $$f(y)dy = f(x^2) d(x^2) = \frac{\sqrt{x^2}}{\sqrt{2 \pi } } e^{\frac{1}{2} (-\lambda -x^2)} \cosh \left(\sqrt{\lambda x^2} \right) \frac{dx^2}{x^2}.$$ Simplifying this and then rescaling by $\sigma$ gives the desired density, $$f_{|X|}(x) = \frac{1}{\sigma}\sqrt{\frac{2}{\pi}} \cosh\left(\frac{x\mu}{\sigma^2}\right) \exp\left(-\frac{x^2 + \mu^2}{2 \sigma^2}\right).$$ This result is supported by simulations, such as this histogram of 100,000 independent draws of $|X|=|X_2-X_1|$ (called "x" in the code) with parameters $\mu_1=-1, \mu_2=5, \sigma_1=4, \sigma_2=1$. On it is plotted the graph of $f_{|X|}$, which neatly coincides with the histogram values. The R code for this simulation follows. # # Specify parameters # mu <- c(-1, 5) sigma <- c(4, 1) # # Simulate data # n.sim <- 1e5 set.seed(17) x.sim <- matrix(rnorm(n.sim*2, mu, sigma), nrow=2) x <- abs(x.sim[2, ] - x.sim[1, ]) # # Display the results # hist(x, freq=FALSE) f <- function(x, mu, sigma) { sqrt(2 / pi) / sigma * cosh(x * mu / sigma^2) * exp(-(x^2 + mu^2)/(2*sigma^2)) } curve(f(x, abs(diff(mu)), sqrt(sum(sigma^2))), lwd=2, col="Red", add=TRUE)
{ "source": [ "https://stats.stackexchange.com/questions/186463", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/3250/" ] }
186,496
As far as my aggregate (and scarce) knowledge on statistics permits, I understood that if $X_1, X_2,..., X_n$ are i.i.d. random variables, then as the term implies they are independent and identically distributed. My concern here is the former property of i.i.d. samples, which reads: $$p(X_{n}|X_{i_1},X_{i_2},...,X_{i_k}) = p(X_{n}),$$ for any collection of distinct $i_j$'s s.t. $1 \leq i_j < n$. However one knows that the aggregate of independent samples of identical distributions provide information about the distribution structure, and as a result about $X_n$ in the above case, so indeed it should not be the case that: $$p(X_{n}|X_{i_1},X_{i_2},...,X_{i_k}) = p(X_{n}).$$ I know that I am victim of fallacy but I don't know why. Please help me out on this one.
I think you are confusing an estimated model of a distribution with a random variable . Let's rewrite the independence assumption as follows: $$ P(X_n | \theta, X_{i_1}, X_{i_2}, \dots, X_{i_k}) = P(X_n | \theta) \tag{1} $$ which says that if you know the underlying distribution of $X_n$ (and, for example, can identify it by a set of parameters $\theta$) then the distribution does not change given that you have observed a few samples from it. For example, think of $X_n$ as the random variable representing the outcome of the $n$-th toss of a coin. Knowing the probability of head and tail for the coin (which, btw, assume is encoded in $\theta$) is enough to know the distribution of $X_n$. In particular, the outcome of the previous tosses does not change the probability of head or tail for the $n$-th toss, and $(1)$ holds. Note, however, that $P(\theta | X_n) \neq P(\theta | X_{i_1}, X_{i_2}, \dots, X_{i_k})$.
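A small numerical illustration of the distinction (the Beta(1,1) prior and the particular counts are my own choices): given $\theta$, past tosses do not change the predictive probability for the next toss, but without $\theta$ they clearly do.

theta <- 0.7
p.given.theta <- theta                                  # P(X_n = 1 | theta): past tosses irrelevant

heads <- 9; tails <- 1                                  # observed tosses
p.given.data <- (1 + heads) / (2 + heads + tails)       # P(X_n = 1 | data), Beta(1,1) prior
c(given.theta = p.given.theta, given.data = p.given.data)   # 0.7 vs about 0.83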
{ "source": [ "https://stats.stackexchange.com/questions/186496", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/22314/" ] }
186,966
While learning about Gradient Boosting, I haven't heard about any constraints regarding the properties of a "weak classifier" that the method uses to build and ensemble model. However, I could not imagine an application of a GB that uses linear regression, and in fact when I've performed some tests - it doesn't work. I was testing the most standard approach with a gradient of sum of squared residuals and adding the subsequent models together. The obvious problem is that the residuals from the first model are populated in such manner that there is really no regression line to fit anymore. My another observation is that a sum of subsequent linear regression models can be represented as a single regression model as well (adding all intercepts and corresponding coefficients) so I cannot imagine how that could ever improve the model. The last observation is that a linear regression (the most typical approach) is using sum of squared residuals as a loss function - the same one that GB is using. I also thought about lowering the learning rate or using only a subset of predictors for each iteration, but that could still be summed up to a single model representation eventually, so I guess it would bring no improvement. What am I missing here? Is linear regression somehow inappropriate to use with Gradient Boosting? Is it because the linear regression uses the sum of squared residuals as a loss function? Are there any particular constraints on the weak predictors so they can be applied to Gradient Boosting?
What am I missing here? I don't think you're really missing anything! Another observation is that a sum of subsequent linear regression models can be represented as a single regression model as well (adding all intercepts and corresponding coefficients) so I cannot imagine how that could ever improve the model. The last observation is that a linear regression (the most typical approach) is using sum of squared residuals as a loss function - the same one that GB is using. Seems to me that you nailed it right there, and gave a short sketch of a proof that linear regression just beats boosting linear regressions in this setting. To be pedantic, both methods are attempting to solve the following optimization problem $$ \hat \beta = \text{argmin}_\beta (y - X \beta)^t (y - X \beta) $$ Linear regression just observes that you can solve it directly, by finding the solution to the linear equation $$ X^t X \beta = X^t y $$ This automatically gives you the best possible value of $\beta$ out of all possibilities. Boosting, whether your weak classifier is a one variable or multi variable regression, gives you a sequence of coefficient vectors $\beta_1, \beta_2, \ldots$. The final model prediction is, as you observe, a sum, and has the same functional form as the full linear regressor $$ X \beta_1 + X \beta_2 + \cdots + X \beta_n = X (\beta_1 + \beta_2 + \cdots + \beta_n) $$ Each of these steps is chosen to further decrease the sum of squared errors. But we could have found the minimum possible sum of square errors within this functional form by just performing a full linear regression to begin with. A possible defense of boosting in this situation could be the implicit regularization it provides. Possibly (I haven't played with this) you could use the early stopping feature of a gradient booster, along with a cross validation, to stop short of the full linear regression. This would provide a regularization to your regression, and possibly help with overfitting. This is not particularly practical, as one has very efficient and well understood options like ridge regression and the elastic net in this setting. Boosting shines when there is no terse functional form around. Boosting decision trees lets the functional form of the regressor/classifier evolve slowly to fit the data, often resulting in complex shapes one could not have dreamed up by hand and eye. When a simple functional form is desired, boosting is not going to help you find it (or at least is probably a rather inefficient way to find it).
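A short R sketch of the argument (data simulated here; the learning rate and iteration count are arbitrary): boosting a full linear-regression base learner just re-creates the OLS fit, since the accumulated coefficients converge to the OLS coefficients.

set.seed(7)
n <- 200
X <- cbind(1, matrix(rnorm(n * 3), n, 3))     # intercept + 3 predictors
y <- drop(X %*% c(1, 2, -1, 0.5) + rnorm(n))

nu   <- 0.1                                   # learning rate
beta <- rep(0, ncol(X))                       # accumulated coefficients
r    <- y                                     # current residuals
for (m in 1:500) {
  b    <- lm.fit(X, r)$coefficients           # least-squares fit to the residuals
  beta <- beta + nu * b
  r    <- r - nu * drop(X %*% b)
}
cbind(boosted = beta, ols = lm.fit(X, y)$coefficients)   # agree to many decimal places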
{ "source": [ "https://stats.stackexchange.com/questions/186966", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/67868/" ] }
187,080
So, we had Secret Santa at work. We're 8 people. We each took turns and pulled a small piece of paper from a bowl with a name on it. The only rule : If you pull your name, you have to put the piece of paper back in the bowl and try again. Let's call the people A, B, C, D, E, F, G, H, which is also the order in which they picked their piece of paper. We did the gift exchange last night. A was F's secret santa. B was E's secret santa. C was D's secret santa. D was C's secret santa. E was B's secret santa. F was A's secret santa. G was H's secret santa. H was G's secret santa. See what happened? We made couples. A and F were each other's secret santa. B and E were each other's secret santa. C and D were each other's secret santa. G and H were each other's secret santa. What is the probability of this happening and how do you calculate it?
The total number of assignments among $2n$ people, where nobody is assigned to themselves, is $$d(2n) = (2n)!(1/2 - 1/6 + \cdots + (-1)^k/k! + \cdots + 1/(2n)!).$$ (These are called derangements .) The value is very close to $(2n)! / e$. If they correspond to perfect pairings, then they are a product of disjoint transpositions . This implies their cycle structure is of the form $$(a_{11}a_{12})(a_{21}a_{22})\cdots(a_{n1}a_{n2}).$$ The number of distinct such patterns is the order of the group of all permutations of the $2n$ names divided by the order of the stabilizer of the pattern. A stabilizing element may swap any number of the pairs and it may also permute the $n!$ pairs, whence there are $2^n n!$ stabilizing elements. Therefore there are $$p(2n) = \frac{(2n)!}{2^n n!}$$ such pairings. Since all such perfect pairings are derangements, and all derangements are equally likely, the chance equals $$\frac{p(2n)}{d(2n)} = \frac{1}{2^n n!(1 - 1/2 + 1/6 - \cdots + (-1)^k/k! + \cdots + 1/(2n)!)} \approx \frac{e}{2^n n!}.$$ For $2n=8$ people the exact answer therefore is $15/2119 \approx 0.00707881$ while the approximation is $e/(2^4\, 4!) \approx 0.00707886$: they agree to five significant figures. To check, this R simulation draws a million random permutations of eight objects, retains only those that are derangements, and counts those that are perfect pairings. It outputs its estimate, the standard error of the estimate, and a Z-score to compare it to the theoretical value. Its output is p.hat se Z 0.006981031 0.000137385 -0.711721705 The small Z-score is consistent with the theoretical value. (These results would be consistent with any theoretical value between $0.0066$ and $0.0073$.) paired <- function(x) crossprod(x[x] - 1:length(x))==0 good <- function(x) sum(x==1:length(x)) == 0 n <- 8 set.seed(17) x <- replicate(1e6, sample(1:n, n)) i.good <- apply(x, 2, good) i.paired <- apply(x, 2, paired) n.deranged <- sum(i.good) k.paired <- sum(i.good & i.paired) p.hat <- k.paired / n.deranged se <- sqrt(p.hat * (1-p.hat) / n.deranged) (c(p.hat=p.hat, se=se, Z=(p.hat - 15/2119)/se))
{ "source": [ "https://stats.stackexchange.com/questions/187080", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/98398/" ] }
187,335
I found two questions here and here about this issue, but there is no obvious answer or explanation yet. I face the same problem, where the validation error is less than the training error, in my convolutional neural network. What does that mean?
It is difficult to be certain without knowing your actual methodology (e.g. cross-validation method, performance metric, data splitting method, etc.). Generally speaking though, training error will almost always underestimate your validation error. However it is possible for the validation error to be less than the training. You can think of it two ways: Your training set had many 'hard' cases to learn Your validation set had mostly 'easy' cases to predict That is why it is important that you really evaluate your model training methodology. If you don't split your data for training properly your results will lead to confusing, if not simply incorrect, conclusions. I think of model evaluation in four different categories: Underfitting – Validation and training error high Overfitting – Validation error is high, training error low Good fit – Validation error low, slightly higher than the training error Unknown fit - Validation error low, training error 'high' I say 'unknown' fit because the result is counter intuitive to how machine learning works. The essence of ML is to predict the unknown. If you are better at predicting the unknown than what you have 'learned', AFAIK the data between training and validation must be different in some way. This could mean you either need to reevaluate your data splitting method, adding more data, or possibly changing your performance metric (are you actually measuring the performance you want?). EDIT To address the OP's reference to a previous python lasagne question . This suggests that you have sufficient data to not require cross-validation and simply have your training, validation, and testing data subsets. Now, if you look at the lasagne tutorial you can see that the same behavior is seen at the top of the page. I would find it hard to believe the authors would post such results if it was strange but instead of just assuming they are correct let's look further. The section of most interest to us here is in the training loop section, just above the bottom you will see how the loss parameters are calculated. The training loss is calculated over the entire training dataset . Likewise, the validation loss is calculated over the entire validation dataset . The training set is typically at least 4 times as large as the validation (80-20). Given that the error is calculated over all samples, you could expect up to approximately 4X the loss measure of the validation set. You will notice, however, that the training loss and validation loss are approaching one another as training continues. This is intentional as if your training error begins to get lower than your validation error you would be beginning to overfit your model!!! I hope this clarifies these errors.
{ "source": [ "https://stats.stackexchange.com/questions/187335", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/98551/" ] }
187,824
I'm currently trying to apply a linear model ( family = gaussian ) to an indicator of biodiversity that cannot take values lower than zero, is zero-inflated and is continuous. Values range from 0 to a little over 0.25. As a consequence, there is quite an obvious pattern in the residuals of the model that I haven't managed to get rid of: Does anyone have any ideas on how to solve this?
There are a variety of solutions to the case of zero-inflated (semi-)continuous distributions: Tobit regression : assumes that the data come from a single underlying Normal distribution, but that negative values are censored and stacked on zero (e.g. censReg package ). Here is a good book about Tobit model, see chapters 1 and 5. see this answer for other censored-Gaussian alternatives hurdle or "two-stage" model: use a binomial model to predict whether the values are 0 or >0, then use a linear model (or Gamma, or truncated Normal, or log-Normal) to model the observed non-zero values (typically you need to roll your own by running two separate models; combined versions where you fit the zero component and the non-zero component at the same time exist for count distributions such as Poisson (e.g glmmTMB , pscl ); glmmTMB will also do 'zero-inflated'/hurdle models for Beta or Gamma responses) Tweedie distributions : distributions in the exponential family that for a given range of shape parameters ( $1<p<2$ ) have a point mass at zero and a skewed positive distribution for $x>0$ (e.g. tweedie , cplm , glmmTMB packages) Or, if your data structure is simple enough, you could just use linear models and use permutation tests or some other robust approach to make sure that your inference isn't being messed up by the interesting distribution of the data. There are R packages/solutions available for most of these cases. There are other questions on SE about zero-inflated (semi)continuous data (e.g. here , here , and here ), but they don't seem to offer a clear general answer ... See also Min & Agresti, 2002, Modeling Nonnegative Data with Clumping at Zero: A Survey for an overview.
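For concreteness, here is a base-R sketch of the two-stage (hurdle) option, ignoring the mixed-effects structure for brevity (dat, biodiv, x1 and x2 are placeholder names); the glmmTMB route mentioned above can add the random effects and fit both parts at once.

# stage 1: is the indicator zero or positive?
m.zero <- glm(as.numeric(biodiv > 0) ~ x1 + x2, family = binomial, data = dat)

# stage 2: how large is it, given that it is positive?
m.pos <- glm(biodiv ~ x1 + x2, family = Gamma(link = "log"),
             data = subset(dat, biodiv > 0))

# combined prediction: P(biodiv > 0) * E[biodiv | biodiv > 0]
p.hat <- predict(m.zero, newdata = dat, type = "response") *
         predict(m.pos,  newdata = dat, type = "response")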
{ "source": [ "https://stats.stackexchange.com/questions/187824", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/98838/" ] }
188,087
For an assignment I've been asked to provide a proof that k-means converges in a finite number of steps. This is what I've written: In the following, $C$ is a collection of all the cluster centres. Define an β€œenergy” function $$E(C)=\sum_{\mathbf{x}}\min_{i=1}^{k}\left\Vert \mathbf{x}-\mathbf{c}_{i}\right\Vert ^{2}$$ The energy function is nonnegative. We see that steps (2) and (3) of the algorithm both reduce the energy. Since the energy is bounded from below and is constantly being reduced it must converge to a local minimum. Iteration can be stopped when $E(C)$ changes at a rate below a certain threshold. Step 2 refers to the step which labels each data point by its closest cluster centre, and step 3 is the step where the centres are updated by taking a mean. This is not sufficient to prove convergence in a finite number of steps. The energy can keep getting smaller but it doesn't rule out the possibility that the centre points can jump about without changing the energy much. In other words there might be multiple energy minima and the algorithm can jump about between them, no?
First, there are at most $k^N$ ways to partition $N$ data points into $k$ clusters; each such partition can be called a "clustering". This is a large but finite number. For each iteration of the algorithm, we produce a new clustering based only on the old clustering. Notice that (1) if the old clustering is the same as the new, then the next clustering will again be the same, and (2) if the new clustering is different from the old, then the newer one has a strictly lower cost. Since the algorithm iterates a function whose domain is a finite set, the iteration must eventually enter a cycle. The cycle cannot have length greater than $1$, because otherwise by (2) you would have some clustering with a lower cost than itself, which is impossible. Hence the cycle must have length exactly $1$, and k-means converges in a finite number of iterations.
{ "source": [ "https://stats.stackexchange.com/questions/188087", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/86522/" ] }
188,903
I have learned about the intuition behind the KL Divergence as how much a model distribution function differs from the theoretical/true distribution of the data. The source I am reading goes on to say that the intuitive understanding of 'distance' between these two distributions is helpful, but should not be taken literally because for two distributions $P$ and $Q$ , the KL Divergence is not symmetric in $P$ and $Q$ . I am not sure how to understand the last statement, or is this where the intuition of 'distance' breaks down? I would appreciate a simple, but insightful example.
A (metric) distance $D$ must be symmetric, i.e. $D(P,Q) = D(Q,P)$. But, from definition, $KL$ is not. Example: $\Omega = \{A,B\}$, $P(A) = 0.2, P(B) = 0.8$, $Q(A) = Q(B) = 0.5$. We have: $$KL(P,Q) = P(A)\log \frac{P(A)}{Q(A)} + P(B) \log \frac{P(B)}{Q(B)} \approx 0.19$$ and $$KL(Q,P) = Q(A)\log \frac{Q(A)}{P(A)} + Q(B) \log \frac{Q(B)}{P(B)} \approx 0.22$$ thus $KL(P,Q) \neq KL(Q,P)$ and therefore $KL$ is not a (metric) distance.
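The arithmetic above is easy to verify directly:

P <- c(A = 0.2, B = 0.8)
Q <- c(A = 0.5, B = 0.5)
KL <- function(p, q) sum(p * log(p / q))
c(KL.P.Q = KL(P, Q), KL.Q.P = KL(Q, P))   # about 0.193 and 0.223: not symmetric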
{ "source": [ "https://stats.stackexchange.com/questions/188903", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/67413/" ] }
189,067
While studying Reinforcement Learning, I have come across many forms of the reward function: $R(s,a)$ , $R(s,a,s')$ , and even a reward function that only depends on the current state. Having said that, I realized it is not very easy to 'make' or 'define' a reward function. Here are my questions: Are there rules on how to make reward functions? Are there other forms of the reward function? For example, a polynomial form perhaps that depends on the state?
Reward functions describe how the agent "ought" to behave. In other words, they have "normative" content, stipulating what you want the agent to accomplish. For example, some rewarding state $s$ might represent the taste of food. Or perhaps, $(s,a)$ might represent the act of tasting the food. So, to the extent that the reward function determines what the agent's motivations are, yes, you have to make it up! There are no absolute restrictions, but if your reward function is "better behaved", the the agent will learn better. Practically, this means speed of convergence, and not getting stuck in local minima. But further specifications will depend strongly on the species of reinforcement learning you are using. For example, is the state/action space continuous or discrete? Is the world or the action selection stochastic? Is reward continuously harvested, or only at the end? One way to view the problem is that the reward function determines the hardness of the problem. For example, traditionally, we might specify a single state to be rewarded: $$ R(s_1)=1 $$ $$ R(s_{2..n})=0 $$ In this case, the problem to be solved is quite a hard one, compared to, say, $R(s_i)=1/i^2$ , where there is a reward gradient over states. For hard problems, specifying more detail, e.g. $R(s,a)$ or $R(s,a,s^\prime)$ can help some algorithms by providing extra clues, but potentially at the expense of requiring more exploration. You might well need to include costs as negative terms in $R$ (e.g. energetic costs ), to make the problem well-specified. For the case of a continuous state space, if you want an agent to learn easily, the reward function should be continuous and differentiable . So polynomials can work well for many algorithms. Further, try to remove localised minima. There are a number of examples of how NOT to make a reward function -- like the Rastrigin function . Having said this, several RL algorithms (e.g. Boltzmann machines ) are somewhat robust to these. If you are using RL to solve a real-world problem, you will probably find that although finding the reward function is the hardest part of the problem, it is intimately tied up with how you specify the state space . For example, in a time-dependent problem, the distance to the goal often makes a poor reward function (e.g. in the mountain car problem ). Such situations can be solved by using higher dimensional state spaces (hidden states or memory traces), or by hierarchical RL. At an abstract level, unsupervised learning was supposed to obviate stipulating "right and wrong" performance. But we can see now that RL simply shifts the responsibility from the teacher/critic to the reward function. There is a less circular way to solve the problem: that is, to infer the best reward function . One method is called inverse RL or "apprenticeship learning" , which generates a reward function that would reproduce observed behaviours. Finding the best reward function to reproduce a set of observations can also be implemented by MLE, Bayesian, or information theoretic methods - if you google for "inverse reinforcement learning".
{ "source": [ "https://stats.stackexchange.com/questions/189067", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/67413/" ] }
189,568
This is a question regarding a practice or method followed by some of my colleagues. While making a logistic regression model, I have seen people replace categorical variables (or continuous variables which are binned) with their respective Weight of Evidence (WoE). This is supposedly done to establish a monotonic relation between the regressor and the dependent variable. Now as far as I understand, once the model is made, the variables in the equation are NOT the variables in the dataset. Rather, the variables in the equation are now kind of the importance or weight of the variables in segregating the dependent variable! My question is: how do we now interpret the model or the model coefficients? For example for the following equation: $$ \log\bigg(\frac{p}{1-p}\bigg) = \beta_0 + \beta_1x_1 $$ we can say that $\exp(\beta_1)$ is the relative increase in the odds ratio for a 1-unit increase in the variable $x_1$. But if the variable is replaced by its WoE, then the interpretation will be changed to: the relative increase in the odds ratio for a 1-unit increase in the IMPORTANCE / WEIGHT of the variable. I have seen this practice on the internet, but nowhere have I found an answer to this question. This link from this community itself is related to a somewhat similar query where someone wrote: WoE displays a linear relationship with the natural logarithm of the odds ratio which is the dependent variable in logistic regression. Therefore, the question of model misspecification does not arise in logistic regression when we use WoE instead of the actual values of the variable. But I still don't get the explanation. Please help me understand what I am missing.
The WoE method consists of two steps: to split a (continuous) variable into a few categories, or to group a (discrete) variable into a few categories (and in both cases you assume that all observations in one category have the "same" effect on the dependent variable); to calculate the WoE value for each category (then the original x values are replaced by the WoE values). The WoE transformation has (at least) three positive effects: It can transform an independent variable so that it establishes a monotonic relationship with the dependent variable. Actually it does more than this - to secure a monotonic relationship it would be enough to "recode" it to any ordered measure (for example 1,2,3,4...), but the WoE transformation actually orders the categories on a "logistic" scale which is natural for logistic regression. For variables with too many (sparsely populated) discrete values, these can be grouped into (densely populated) categories and the WoE can be used to express information for the whole category. The (univariate) effect of each category on the dependent variable can be simply compared across categories and across variables because WoE is a standardized value (for example you can compare the WoE of married people to the WoE of manual workers). It also has (at least) three drawbacks: Loss of information (variation) due to binning into a few categories. It is a "univariate" measure, so it does not take into account the correlation between independent variables. It is easy to manipulate (overfit) the effect of variables according to how the categories are created. Conventionally, the betas of the regression (where the x has been replaced by WoE) are not interpreted per se but are multiplied with the WoE to obtain a "score" (for example, the beta for the variable "marital status" can be multiplied with the WoE of the "married people" group to see the score of married people; the beta for the variable "occupation" can be multiplied by the WoE of "manual workers" to see the score of manual workers. Then, if you are interested in the score of married manual workers, you sum up these two scores and see how large the effect on the outcome is). The higher the score, the greater the probability of an outcome equal to 1.
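To illustrate the calculation in step 2, here is a small R sketch that computes the WoE per category for a binary outcome and then fits the logistic regression on the WoE-coded variable. The data, the grouping variable and the sign convention are made up for the example; in credit scoring the ratio is often written as ln(% of non-events / % of events), and some fields flip it.

set.seed(1)
# Hypothetical data: y is the binary outcome, grp is an already-binned predictor
d <- data.frame(grp = sample(c("A", "B", "C"), 500, replace = TRUE),
                y = rbinom(500, 1, 0.3))
tab <- table(d$grp, d$y)
p_event <- tab[, "1"] / sum(tab[, "1"])       # distribution of y = 1 across categories
p_nonevent <- tab[, "0"] / sum(tab[, "0"])    # distribution of y = 0 across categories
woe <- log(p_nonevent / p_event)              # WoE per category (sign convention varies)
woe
# Replace the categories by their WoE values and fit the logistic regression
d$grp_woe <- woe[d$grp]
fit <- glm(y ~ grp_woe, family = binomial, data = d)
coef(fit)

In a univariate model like this, the fitted slope on grp_woe should sit near -1 under this sign convention (near +1 if you flip the ratio), which is one way to see the "linear relationship with the natural logarithm of the odds" quoted in the question.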
{ "source": [ "https://stats.stackexchange.com/questions/189568", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/36224/" ] }
189,652
My understanding is that when some features have different ranges in their values (for example, imagine one feature being the age of a person and another one being their salary in USD), this will negatively affect algorithms because the feature with bigger values will have more influence. Given that, is it a good practice to simply ALWAYS scale/normalize the data? It looks to me that if the values are already similar among them, then normalizing them will have little effect, but if the values are very different normalization will help; however, it feels too simple to be true :) Am I missing something? Are there situations/algorithms where it is actually desirable to let some features deliberately outweigh others?
First things first, I don't think there are many questions of the form "Is it a good practice to always X in machine learning" where the answer is going to be definitive. Always? Always always? Across parametric, non-parametric, Bayesian, Monte Carlo, social science, purely mathematical, and million-feature models? That'd be nice, wouldn't it! Concretely though, here are a few ways in which: it just depends. Some times when normalizing is good: 1) Several algorithms, in particular SVMs come to mind, can sometimes converge far faster on normalized data (although why, precisely, I can't recall). 2) When your model is sensitive to magnitude, and the units of two different features are different, and arbitrary. This is like the case you suggest, in which something gets more influence than it should. But of course -- not all algorithms are sensitive to magnitude in the way you suggest. The fitted linear regression model will be the same whether you do or don't scale your data (the coefficients simply rescale with the units), because it's looking at proportional relationships between them. Some times when normalizing is bad: 1) When you want to interpret your coefficients, and they don't normalize well. Regression on something like dollars gives you a meaningful outcome. Regression on proportion-of-maximum-dollars-in-sample might not. 2) When, in fact, the units on your features are meaningful, and distance does make a difference! Back to SVMs -- if you're trying to find a max-margin classifier, then the units that go into that 'max' matter. Scaling features for clustering algorithms can substantially change the outcome. Imagine four clusters around the origin, each one in a different quadrant, all nicely scaled. Now, imagine the y-axis being stretched to ten times the length of the x-axis. Instead of four little quadrant-clusters, you're going to get the long squashed baguette of data chopped into four pieces along its length! (And, the important part is, you might prefer either of these!) In I'm sure unsatisfying summary, the most general answer is that you need to ask yourself seriously what makes sense with the data, and model, you're using.
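A short R sketch of the clustering point above; the data, the stretch factor of 10 and the use of k-means are invented purely for illustration.

set.seed(42)
# Four well-separated clusters, one per quadrant, on comparable scales
x <- c(rnorm(50, -2), rnorm(50, 2), rnorm(50, -2), rnorm(50, 2))
y <- c(rnorm(50, -2), rnorm(50, -2), rnorm(50, 2), rnorm(50, 2))
d <- data.frame(x, y)
km_orig <- kmeans(d, centers = 4, nstart = 20)
# Stretch the y-axis by a factor of 10, e.g. a change of measurement units
d_str <- transform(d, y = 10 * y)
km_str <- kmeans(d_str, centers = 4, nstart = 20)
# Cross-tabulate the assignments: the stretched solution tends to slice along y
# (the "squashed baguette"), largely ignoring the separation in x
table(original = km_orig$cluster, stretched = km_str$cluster)
# Standardizing the stretched data first recovers the quadrant structure
km_back <- kmeans(scale(d_str), centers = 4, nstart = 20)
table(original = km_orig$cluster, rescaled = km_back$cluster)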
{ "source": [ "https://stats.stackexchange.com/questions/189652", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/91492/" ] }
189,822
What difference does centering (or de-meaning) your data make for PCA? I've heard that it makes the maths easier or that it prevents the first PC from being dominated by the variables' means, but I feel like I haven't been able to firmly grasp the concept yet. For example, the top answer here How does centering the data get rid of the intercept in regression and PCA? describes how not centering would pull the first PCA through the origin, rather than the main axis of the point cloud. Based on my understanding of how the PC's are obtained from the covariance matrix's eigenvectors, I can't understand why this would happen. Moreover, my own calculations with and without centering seem to make little sense. Consider the setosa flowers in the iris dataset in R. I calculated the eigenvectors and eigenvalues of the sample covariance matrix as follows. data(iris) df <- iris[iris$Species=='setosa',1:4] e <- eigen(cov(df)) > e $values [1] 0.236455690 0.036918732 0.026796399 0.009033261 $vectors [,1] [,2] [,3] [,4] [1,] -0.66907840 0.5978840 0.4399628 -0.03607712 [2,] -0.73414783 -0.6206734 -0.2746075 -0.01955027 [3,] -0.09654390 0.4900556 -0.8324495 -0.23990129 [4,] -0.06356359 0.1309379 -0.1950675 0.96992969 If I center the dataset first, I get exactly the same results. This seems quite obvious, since centering does not change the covariance matrix at all. df.centered <- scale(df,scale=F,center=T) e.centered<- eigen(cov(df.centered)) e.centered The prcomp function results in exactly this eigenvalue-eigenvector combination as well, for both the centered and uncentered dataset. p<-prcomp(df) p.centered <- prcomp(df.centered) Standard deviations: [1] 0.48626710 0.19214248 0.16369606 0.09504347 Rotation: PC1 PC2 PC3 PC4 Sepal.Length -0.66907840 0.5978840 0.4399628 -0.03607712 Sepal.Width -0.73414783 -0.6206734 -0.2746075 -0.01955027 Petal.Length -0.09654390 0.4900556 -0.8324495 -0.23990129 Petal.Width -0.06356359 0.1309379 -0.1950675 0.96992969 However, the prcomp function has the default option center = TRUE . Disabling this option results in the following PC's for the uncentered data ( p.centered remains the same when center is set to false): p.uncentered <- prcomp(df,center=F) > p.uncentered Standard deviations: [1] 6.32674700 0.22455945 0.16369617 0.09766703 Rotation: PC1 PC2 PC3 PC4 Sepal.Length -0.8010073 0.40303704 0.4410167 0.03811461 Sepal.Width -0.5498408 -0.78739486 -0.2753323 -0.04331888 Petal.Length -0.2334487 0.46456598 -0.8317440 -0.19463332 Petal.Width -0.0395488 0.04182015 -0.1946750 0.97917752 Why is this different from my own eigenvector calculations on the covariance matrix of the uncentered data? Does it have to do with the calculation? I've seen mentioned that prcomp uses something called the SVD method rather than the eigenvalue decomposition to calculate the PC's. The function princomp uses the latter, but its results are identical to prcomp . Does my issue relate to the answer I described at the top of this post? EDIT: Issue was cleared up by the helpful @ttnphns. See his comment below, on this question: What does it mean to compute eigenvectors of a covariance matrix if the data were not centered first? and in this answer: https://stats.stackexchange.com/a/22520/3277 . In short: a covariance matrix implicitly involves centering of the data already. PCA uses either SVD or eigendecomposition of the centered data $\bf X$, and the covariance matrix is then equal to ${\bf X'X}/(n-1)$.
As you remarked yourself and as explained by @ttnphns in the comments, computing the covariance matrix implicitly performs centering: variance, by definition, is the average squared deviation from the mean. Centered and non-centered data will have identical covariance matrices. So if by PCA we understand the following procedure: $$\mathrm{Data}\to\text{Covariance matrix}\to\text{Eigen-decomposition},$$ then centering does not make any difference. [Wikipedia:] To find the axes of the ellipse, we must first subtract the mean of each variable from the dataset to center the data around the origin. Then, we compute the covariance matrix of the data... And so you are right to observe that this is not a very accurate formulation. When people talk about "PCA on non-centered data", they mean that instead of the covariance matrix the eigen-decomposition is performed on the $\mathbf X^\top \mathbf X/(n-1)$ matrix. If $\mathbf X$ is centered then this will be exactly the covariance matrix. If not then not. So if by PCA we understand the following procedure: $$\text{Data } \mathbf X\to\text{Matrix } \mathbf X^\top \mathbf X/(n-1)\to\text{Eigen-decomposition},$$ then centering matters a lot and has the effect described and illustrated by @ttnphns in How does centering the data get rid of the intercept in regression and PCA? It might seem weird to even mention this "strange" procedure, however consider that PCA can be very conveniently performed via singular value decomposition (SVD) of the data matrix $\mathbf X$ itself. I describe this in detail here: Relationship between SVD and PCA. How to use SVD to perform PCA? In this case the procedure is as follows: $$\text{Data } \mathbf X \to \text{Singular value decomposition}.$$ If $\mathbf X$ is centered then this is equivalent to standard PCA done via the covariance matrix. But if not, then it's equivalent to the "non-centered" PCA as described above. Since SVD is a very common and very convenient way to perform PCA, in practice it can be quite important to remember to center the data before calling the svd function. I certainly had my share of bugs because of forgetting to do it.
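To see the three procedures side by side in code, here is a short R sketch using the same setosa subset as in the question (eigenvector signs may flip between methods, which is harmless):

data(iris)
X <- as.matrix(iris[iris$Species == "setosa", 1:4])
n <- nrow(X)
Xc <- scale(X, center = TRUE, scale = FALSE)   # centered data
# (1) Covariance route: centering is implicit, so these two agree
eigen(cov(X))$vectors
eigen(crossprod(Xc) / (n - 1))$vectors
# (2) "Non-centered PCA": eigen-decomposition of X'X/(n-1) without centering,
#     which is what prcomp(X, center = FALSE) effectively does
eigen(crossprod(X) / (n - 1))$vectors
# (3) SVD route: svd on centered data reproduces standard PCA, svd on raw data does not
svd(Xc)$v
svd(X)$v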
{ "source": [ "https://stats.stackexchange.com/questions/189822", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/62518/" ] }
190,148
Hinton and Salakhutdinov in Reducing the Dimensionality of Data with Neural Networks, Science 2006 proposed a non-linear PCA through the use of a deep autoencoder. I have tried to build and train a PCA autoencoder with Tensorflow several times but I have never been able to obtain a better result than linear PCA. How can I efficiently train an autoencoder? (Later edit by @amoeba: the original version of this question contained Python Tensorflow code that did not work correctly. One can find it in the edit history.)
Here is the key figure from the 2006 Science paper by Hinton and Salakhutdinov: It shows dimensionality reduction of the MNIST dataset ($28\times 28$ black and white images of single digits) from the original 784 dimensions to two. Let's try to reproduce it. I will not be using Tensorflow directly, because it's much easier to use Keras (a higher-level library running on top of Tensorflow) for simple deep learning tasks like this. H&S used $$784\to 1000\to 500\to 250\to 2\to 250\to 500\to 1000\to 784$$ architecture with logistic units, pre-trained with the stack of Restricted Boltzmann Machines. Ten years later, this sounds very old-school. I will use a simpler $$784\to 512\to 128\to 2\to 128\to 512\to 784$$ architecture with exponential linear units without any pre-training. I will use Adam optimizer (a particular implementation of adaptive stochastic gradient descent with momentum). The code is copy-pasted from a Jupyter notebook. In Python 3.6 you need to install matplotlib (for pylab), NumPy, seaborn, TensorFlow and Keras. When running in Python shell, you may need to add plt.show() to show the plots. Initialization %matplotlib notebook import pylab as plt import numpy as np import seaborn as sns; sns.set() import keras from keras.datasets import mnist from keras.models import Sequential, Model from keras.layers import Dense from keras.optimizers import Adam (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train = x_train.reshape(60000, 784) / 255 x_test = x_test.reshape(10000, 784) / 255 PCA mu = x_train.mean(axis=0) U,s,V = np.linalg.svd(x_train - mu, full_matrices=False) Zpca = np.dot(x_train - mu, V.transpose()) Rpca = np.dot(Zpca[:,:2], V[:2,:]) + mu # reconstruction err = np.sum((x_train-Rpca)**2)/Rpca.shape[0]/Rpca.shape[1] print('PCA reconstruction error with 2 PCs: ' + str(round(err,3))); This outputs: PCA reconstruction error with 2 PCs: 0.056 Training the autoencoder m = Sequential() m.add(Dense(512, activation='elu', input_shape=(784,))) m.add(Dense(128, activation='elu')) m.add(Dense(2, activation='linear', name="bottleneck")) m.add(Dense(128, activation='elu')) m.add(Dense(512, activation='elu')) m.add(Dense(784, activation='sigmoid')) m.compile(loss='mean_squared_error', optimizer = Adam()) history = m.fit(x_train, x_train, batch_size=128, epochs=5, verbose=1, validation_data=(x_test, x_test)) encoder = Model(m.input, m.get_layer('bottleneck').output) Zenc = encoder.predict(x_train) # bottleneck representation Renc = m.predict(x_train) # reconstruction This takes ~35 sec on my work desktop and outputs: Train on 60000 samples, validate on 10000 samples Epoch 1/5 60000/60000 [==============================] - 7s - loss: 0.0577 - val_loss: 0.0482 Epoch 2/5 60000/60000 [==============================] - 7s - loss: 0.0464 - val_loss: 0.0448 Epoch 3/5 60000/60000 [==============================] - 7s - loss: 0.0438 - val_loss: 0.0430 Epoch 4/5 60000/60000 [==============================] - 7s - loss: 0.0423 - val_loss: 0.0416 Epoch 5/5 60000/60000 [==============================] - 7s - loss: 0.0412 - val_loss: 0.0407 so you can already see that we surpassed PCA loss after only two training epochs. (By the way, it is instructive to change all activation functions to activation='linear' and to observe how the loss converges precisely to the PCA loss. That is because linear autoencoder is equivalent to PCA.) 
Plotting PCA projection side-by-side with the bottleneck representation plt.figure(figsize=(8,4)) plt.subplot(121) plt.title('PCA') plt.scatter(Zpca[:5000,0], Zpca[:5000,1], c=y_train[:5000], s=8, cmap='tab10') plt.gca().get_xaxis().set_ticklabels([]) plt.gca().get_yaxis().set_ticklabels([]) plt.subplot(122) plt.title('Autoencoder') plt.scatter(Zenc[:5000,0], Zenc[:5000,1], c=y_train[:5000], s=8, cmap='tab10') plt.gca().get_xaxis().set_ticklabels([]) plt.gca().get_yaxis().set_ticklabels([]) plt.tight_layout() Reconstructions And now let's look at the reconstructions (first row - original images, second row - PCA, third row - autoencoder): plt.figure(figsize=(9,3)) toPlot = (x_train, Rpca, Renc) for i in range(10): for j in range(3): ax = plt.subplot(3, 10, 10*j+i+1) plt.imshow(toPlot[j][i,:].reshape(28,28), interpolation="nearest", vmin=0, vmax=1) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.tight_layout() One can obtain much better results with deeper network, some regularization, and longer training. Experiment. Deep learning is easy!
{ "source": [ "https://stats.stackexchange.com/questions/190148", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/25392/" ] }
190,152
I'd like to show how the values of certain variables (~ 15) change over time, but I'd also like to show how the variables differ from each other in each year. So I created this plot: But even when changing the colour scheme or adding different line/shape types this looks messy. Is there a better way to visualise this kind of data? Test data with R code: structure(list(Var = structure(c(1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 3L, 3L, 5L, 5L, 5L, 5L, 5L, 5L, 5L, 6L, 6L, 6L, 6L, 6L, 6L, 7L, 7L, 7L, 7L, 7L, 7L, 7L, 8L, 8L, 8L, 8L, 8L, 8L, 8L, 9L, 9L, 9L, 9L, 9L, 9L, 9L, 11L, 11L, 11L, 11L, 11L, 11L, 11L, 12L, 12L, 12L, 12L, 12L, 12L, 13L, 14L, 14L, 14L, 14L, 14L, 14L, 14L, 16L, 16L, 16L, 16L, 16L, 16L, 17L, 17L, 17L, 17L, 17L, 17L, 17L, 18L, 18L, 18L, 18L, 18L, 18L, 18L), .Label = c("A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z"), class = "factor"), Year = c(2015L, 1991L, 1993L, 1996L, 2000L, 2004L, 2011L, 2015L, 1991L, 1993L, 1996L, 2000L, 2004L, 2011L, 2015L, 1991L, 1993L, 1996L, 2000L, 2004L, 2011L, 2015L, 1993L, 1996L, 2000L, 2004L, 2011L, 2015L, 1991L, 1993L, 1996L, 2000L, 2004L, 2011L, 2015L, 1991L, 1993L, 1996L, 2000L, 2004L, 2011L, 2015L, 1991L, 1993L, 1996L, 2000L, 2004L, 2011L, 2015L, 1991L, 1993L, 1996L, 2000L, 2004L, 2011L, 2015L, 1993L, 1996L, 2000L, 2004L, 2011L, 2015L, 2015L, 1991L, 1993L, 1996L, 2000L, 2004L, 2011L, 2015L, 1991L, 1993L, 1996L, 2000L, 2011L, 2015L, 1991L, 1993L, 1996L, 2000L, 2004L, 2011L, 2015L, 1991L, 1993L, 1996L, 2000L, 2004L, 2011L, 2015L), Val = c(25.6, 22.93, 20.82, 24.1, 24.5, 29, 25.55, 24.5, 24.52, 20.73, 25.8, 25.5, 29.5, 27.7, 25.1, 25, 24.55, 26.75, 25, 30.5, 27.25, 25.1, 22.4, 27.07, 26, 29, 27.2, 24.2, 23, 24.27, 27.68, 27, 30.5, 28.1, 24.9, 23.75, 22.75, 27.25, 25, 29, 28.45, 24, 20.25, 17.07, 24.45, 25, 28.5, 26.75, 24.9, 21.25, 20.65, 25.1, 24.5, 26.5, 25.35, 23.5, 21.93, 26.5, 24.5, 29, 29.1, 26.4, 28.1, 23.75, 26.5, 28.05, 27, 30.5, 25.65, 23.3, 23.25, 24.57, 26.07, 27.5, 28.85, 27.7, 22, 23.43, 26.88, 27, 30.5, 29.25, 28.1, 23, 23.8, 28.32, 27, 29.5, 29.15, 27.6)), row.names = c(1L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 13L, 14L, 15L, 16L, 17L, 18L, 19L, 20L, 21L, 22L, 23L, 24L, 25L, 26L, 27L, 28L, 29L, 30L, 31L, 32L, 35L, 36L, 37L, 38L, 39L, 40L, 41L, 44L, 45L, 46L, 47L, 48L, 49L, 50L, 53L, 54L, 55L, 56L, 57L, 58L, 59L, 62L, 63L, 64L, 65L, 66L, 67L, 68L, 69L, 70L, 71L, 72L, 73L, 74L, 75L, 78L, 79L, 80L, 81L, 82L, 83L, 84L, 87L, 88L, 89L, 90L, 91L, 92L, 95L, 96L, 97L, 98L, 99L, 100L, 101L, 104L, 105L, 106L, 107L, 108L, 109L, 110L), na.action = structure(c(2L, 3L, 11L, 12L, 33L, 34L, 42L, 43L, 51L, 52L, 60L, 61L, 76L, 77L, 85L, 86L, 93L, 94L, 102L, 103L), .Names = c("2", "3", "11", "12", "33", "34", "42", "43", "51", "52", "60", "61", "76", "77", "85", "86", "93", "94", "102", "103"), class = "omit"), class = "data.frame", .Names = c("Var", "Year", "Val"))
Fortuitously or otherwise, your example is of optimal size (up to 7 values for each of 15 groups) first, to show that there is a problem graphically; and second, to allow other and fairly simple solutions. The graph is of a kind often called spaghetti by people in different fields, although it's not always clear whether that term is meant as affectionate or abusive. (The term spaghetti was used by Gene Zelazny in 1985, but may well be much older yet.) The graph does show the collective or family behaviour of all the groups, but it is fairly hopeless at showing the detail to be explored. One standard alternative is just to show the separate groups in separate panels, but that in turn can make precise group-to-group comparisons difficult; each group is separated from its context of the other groups. So why not combine both ideas: a separate panel for each group, but also show the other groups as backdrop? This hinges crucially on highlighting the group which is in focus and on downplaying the others, which is easy enough in this example given some use of line colour, thickness etc. In other examples, marker or point symbol choices might be natural instead. In this case, details of possible practical or scientific importance or interest are highlighted: We only have one value for A and M. We don't have all values for all given years in all other cases. Some groups plot high, some low, and so forth. I won't attempt an interpretation here: the data are anonymous, but that is the researcher's concern in any case. Depending on what is easy or possible in your software, there is scope for changing small details here, such as whether axis labels and titles are repeated (there are simple arguments both for and against). The larger issue is how far this strategy will work more generally. The number of groups is the major driver, more so than the number of points in each group. Roughly speaking, the approach might work up to about 25 groups (a 5 x 5 display, say): with more groups, not only do the graphs become smaller and more difficult to read, but even the researcher loses the inclination to scan all the panels. If there were hundreds (thousands, ...) of groups, it would usually be essential to select a small number of groups to show. Some mix of criteria such as selecting some "typical" and some "extreme" panels would be needed; that should be driven by project goals and some idea of what makes sense for each dataset. Another approach that can be efficient is to emphasize a small number of series in each panel. So, if there were 25 broad groups, each broad group could be shown with all others as backdrop. Alternatively, there could be some averaging or other summarization. Using (e.g.) principal or independent components might also be a good idea. Although the example calls for line plots, the principle is naturally much general. Examples could be multiplied, scatter plots, model diagnostic plots, etc. Similar but not identical ideas Cleveland (1985, pp.74, 203, 205, 268) shows graphs in which summary curves for groups are repeated with data shown separately for each group. (Note: these graphs do not appear in Cleveland 1994.) Wallgren et al. (1996, pp.47, 69) use the same idea. Cleveland, W.S. 1985. Elements of Graphing Data. Monterey, CA: Wadsworth. Cleveland, W.S. 1994. Elements of Graphing Data. Summit, NJ: Hobart Press. Wallgren, A., B. Wallgren, R. Persson, U. Jorner, and J.-A. Haaland. 1996. Graphing Statistics and Data: Creating Better Charts. Newbury Park, CA: SAGE. 
Zelazny (1985 and later editions) has a different twist: Zelazny, G. 1985. Say It With Charts: The Executive's Guide to Successful Presentations. Homewood, IL: Dow Jones-Irwin. See p.39 for a graph with four panels: series A compared in turn with series B, C, D, E. See also p.111. Same pages in 4th edition: Zelazny, G. 2001. Say It With Charts: The Executive's Guide to Visual Communication. New York: McGraw-Hill. See p.39 for a graph with four panels: series A compared in turn with series B, C, D, E. See also p.111. Direct examples [others are most welcome] Koenker, R. 2005. Quantile Regression. Cambridge: Cambridge University Press. See pp.12-13. Carr, D.B. and Pickle, L.W. 2010. Visualizing Data Patterns with Micromaps. Boca Raton, FL: CRC Press. See p.85. Cox, N.J. 2010. Graphing subsets. Stata Journal 10: 670-681. Yau, N. 2013. Data Points: Visualization That Means Something. Indianapolis, IN: John Wiley. See p.224. Rougier, N.P., Droettboom, M. and Bourne, P.E. 2014. Ten simple rules for better figures. PLOS Computational Biology 10(9): e1003833. doi:10.1371/journal.pcbi.1003833 link here Schwabish, J.A. 2014. An economist's guide to visualizing data. Journal of Economic Perspectives 28: 209-234. Knaflic, C.N. 2015. Storytelling with Data: A Data Visualization Guide for Business Professionals. Hoboken, NJ: Wiley. See p.233. Unwin, A. 2015. Graphical Data Analysis with R. Boca Raton, FL: CRC Press. See pp.121, 217. Berinato, S. 2016. Good Charts: The HBR Guide to Making Smarter, More Persuasive Data Visualizations. Boston, MA: Harvard Business Review Press. See p.74. Cairo, A. 2016. The Truthful Art: Data, Charts, and Maps for Communication. San Francisco, CA: New Riders. See p.211 CamΓ΅es, J. 2016. Data at Work: Best Practices for Creating Effective Charts and Information Graphics in Microsoft Excel . San Francisco, CA: New Riders. See p.354. Standage, T. 2016. Go Figure: The Economist Explains: Things You Didn't Know You Didn't Know. London: Profile Books. See p.177. Wickham, H. 2016. ggplot2: Elegant Graphics for Data Analysis. Cham: Springer. See p.157. Schwabish, J. 2017. Better Presentations: A Guide for Scholars, Researchers, and Wonks. New York: Columbia University Press. See p.98. Kriebel, A. and Murray, E. 2018. #MakeoverMonday: Improving How We Visualize and Analyze Data, One Chart at a Time. Hoboken, NJ: John Wiley. See p.303. Grant, R. 2019. Data Visualization: Charts, Maps, and Interactive Graphics. Boca Raton, FL: CRC Press. See p.52. Koponen, J. and HildΓ©n, J. 2019. The Data Visualization Handbook. Espoo: Aalto ARTS Books. See p.101. Tufte, E.R. 2020. Seeing with Fresh Eyes: Meaning, Space, Data, Truth. Cheshire, CT: Graphics Press. See p.26 [original work by John Burn-Murdoch, but showing 0 on logarithmic scale (!) and alphabetical order by countries, which could be tuned] Note: The graph was created in Stata. subsetplot must be installed first with ssc inst subsetplot . Data were copied and pasted from R and value labels were defined to show years as 90 95 00 05 10 15 . The main command is subsetplot connected Val Year, by(Var) c(L) lcolor(gs12) backdrop(line) xtitle("") combine(imargin(small)) subset(lcolor(blue) mcolor(blue))
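For those working in R rather than Stata, a rough ggplot2 sketch of the same idea (one panel per group, with all the other groups as a grey backdrop) might look like the following. It assumes the data frame from the question has been assigned to da, and it is a sketch rather than a reproduction of the Stata graph.

library(ggplot2)
# 'da' is assumed to be the data frame from the question, with columns Var, Year, Val
backdrop <- da
names(backdrop)[names(backdrop) == "Var"] <- "grp"   # hide the faceting variable
ggplot(da, aes(Year, Val)) +
  # the backdrop data has no Var column, so it is drawn in every panel
  geom_line(data = backdrop, aes(group = grp), colour = "grey80") +
  geom_line(colour = "blue") +
  geom_point(colour = "blue", size = 1) +
  facet_wrap(~ Var) +
  theme_bw()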
{ "source": [ "https://stats.stackexchange.com/questions/190152", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
190,763
I have fish density data that I am trying to compare between several different collection techniques. The data has lots of zeros, and the histogram looks vaguely appropriate for a Poisson distribution except that, as densities, it is not integer data. I am relatively new to GLMs and have spent the last several days looking online for how to tell which distribution to use but have failed utterly in finding any resources that help make this decision. A sample histogram of the data looks like the following: I have no idea how to go about deciding on the appropriate family to use for the GLM. If anyone has any advice or could give me a resource I should check out, that would be fantastic.
A generalized linear model is defined in terms of a linear predictor $$ \eta = \boldsymbol{X} \beta $$ that is passed through the link function $g$: $$ g(E(Y\,|\,\boldsymbol{X})) = \eta $$ It models the relation between the dependent variable $Y$ and independent variables $\boldsymbol{X} = X_1,X_2,\dots,X_k$. More precisely, it models a conditional expectation of $Y$ given $\boldsymbol{X}$, $$ E(Y\,|\,\boldsymbol{X} ) = \mu = g^{-1}(\eta) $$ so the model can be defined in probabilistic terms as $$ Y\,|\,\boldsymbol{X} \sim f(\mu, \sigma^2) $$ where $f$ is a probability distribution of the exponential family. So the first thing to notice is that $f$ is not the distribution of $Y$, but $Y$ follows it conditionally on $\boldsymbol{X}$. The choice of this distribution depends on your knowledge (what you can assume) about the relation between $Y$ and $\boldsymbol{X}$. So anywhere you read about the distribution, what is meant is the conditional distribution. If your outcome is continuous and unbounded, then the most "default" choice is the Gaussian distribution (a.k.a. normal distribution), i.e. the standard linear regression (unless you use a link function other than the default identity link). If you are dealing with a continuous non-negative outcome, then you could consider the Gamma distribution or the inverse Gaussian distribution. If your outcome is discrete, or more precisely, you are dealing with counts (how many times something happens in a given time interval), then the most common choice of distribution to start with is the Poisson distribution. The problem with the Poisson distribution is that it is rather inflexible in that it assumes that the mean is equal to the variance; if this assumption is not met, you may consider using the quasi-Poisson family or the negative binomial distribution (see also Definition of dispersion parameter for quasipoisson family). If your outcome is binary (zeros and ones), proportions of "successes" and "failures" (values between 0 and 1), or their counts, you can use the Binomial distribution, i.e. the logistic regression model. If there are more than two categories, you would use the multinomial distribution in multinomial regression. On the other hand, in practice, if you are interested in building a predictive model, you may be interested in testing a few different distributions, and in the end learn that one of them gives you more accurate results than the others even if it is not the most "appropriate" in terms of theoretical considerations (e.g. in theory you should use Poisson, but in practice standard linear regression works best for your data).
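As a concrete illustration, in R the family (and, implicitly, its default link) is just an argument to glm(). The data below are simulated only to show the syntax for the cases discussed above, not to model your fish densities.

set.seed(123)
n <- 200
x <- rnorm(n)
# Continuous, unbounded outcome: Gaussian family (ordinary linear regression)
y_norm <- 1 + 2 * x + rnorm(n)
fit_norm <- glm(y_norm ~ x, family = gaussian)
# Counts: Poisson family with a log link
y_pois <- rpois(n, lambda = exp(0.5 + 0.8 * x))
fit_pois <- glm(y_pois ~ x, family = poisson)
# If the residual deviance is much larger than the degrees of freedom,
# consider family = quasipoisson or a negative binomial model (MASS::glm.nb)
fit_qpois <- glm(y_pois ~ x, family = quasipoisson)
# Binary outcome: binomial family (logistic regression)
y_bin <- rbinom(n, 1, plogis(-0.5 + 1.5 * x))
fit_bin <- glm(y_bin ~ x, family = binomial)
# Continuous, strictly positive outcome: Gamma family, often with a log link
y_gam <- rgamma(n, shape = 2, rate = 2 / exp(0.3 + 0.5 * x))
fit_gam <- glm(y_gam ~ x, family = Gamma(link = "log"))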
{ "source": [ "https://stats.stackexchange.com/questions/190763", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/100505/" ] }
190,993
In ridge regression, the objective function to be minimized is: $$\text{RSS}+\lambda \sum\beta_j^2.$$ Can this be optimized using the Lagrange multiplier method? Or is it straight differentiation?
There are two formulations for the ridge problem. The first one is $$\boldsymbol{\beta}_R = \operatorname*{argmin}_{\boldsymbol{\beta}} \left( \mathbf{y} - \mathbf{X} \boldsymbol{\beta} \right)^{\prime} \left( \mathbf{y} - \mathbf{X} \boldsymbol{\beta} \right)$$ subject to $$\sum_{j} \beta_j^2 \leq s. $$ This formulation shows the size constraint on the regression coefficients. Note what this constraint implies; we are forcing the coefficients to lie in a ball around the origin with radius $\sqrt{s}$. The second formulation is exactly your problem $$\boldsymbol{\beta}_R = \operatorname*{argmin}_{\boldsymbol{\beta}} \left( \mathbf{y} - \mathbf{X} \boldsymbol{\beta} \right)^{\prime} \left( \mathbf{y} - \mathbf{X} \boldsymbol{\beta} \right) + \lambda \sum\beta_j^2 $$ which may be viewed as the Largrange multiplier formulation. Note that here $\lambda$ is a tuning parameter and larger values of it will lead to greater shrinkage. You may proceed to differentiate the expression with respect to $\boldsymbol{\beta}$ and obtain the well-known ridge estimator $$\boldsymbol{\beta}_{R} = \left( \mathbf{X}^{\prime} \mathbf{X} + \lambda \mathbf{I} \right)^{-1} \mathbf{X}^{\prime} \mathbf{y} \tag{1}$$ The two formulations are completely equivalent , since there is a one-to-one correspondence between $s$ and $\lambda$. Let me elaborate a bit on that. Imagine that you are in the ideal orthogonal case, $\mathbf{X}^{\prime} \mathbf{X} = \mathbf{I}$. This is a highly simplified and unrealistic situation but we can investigate the estimator a little more closely so bear with me. Consider what happens to equation (1). The ridge estimator reduces to $$\boldsymbol{\beta}_R = \left( \mathbf{I} + \lambda \mathbf{I} \right)^{-1} \mathbf{X}^{\prime} \mathbf{y} = \left( \mathbf{I} + \lambda \mathbf{I} \right)^{-1} \boldsymbol{\beta}_{OLS} $$ as in the orthogonal case the OLS estimator is given by $\boldsymbol{\beta}_{OLS} = \mathbf{X}^{\prime} \mathbf{y}$. Looking at this component-wise now we obtain $$\beta_R = \frac{\beta_{OLS}}{1+\lambda} \tag{2}$$ Notice then that now the shrinkage is constant for all coefficients. This might not hold in the general case and indeed it can be shown that the shrinkages will differ widely if there are degeneracies in the $\mathbf{X}^{\prime} \mathbf{X}$ matrix. But let's return to the constrained optimization problem. By the KKT theory , a necessary condition for optimality is $$\lambda \left( \sum \beta_{R,j} ^2 -s \right) = 0$$ so either $\lambda = 0$ or $\sum \beta_{R,j} ^2 -s = 0$ (in this case we say that the constraint is binding). If $\lambda = 0$ then there is no penalty and we are back in the regular OLS situation. Suppose then that the constraint is binding and we are in the second situation. Using the formula in (2), we then have $$ s = \sum \beta_{R,j}^2 = \frac{1}{\left(1 + \lambda \right)^2} \sum \beta_{OLS,j}^2$$ whence we obtain $$\lambda = \sqrt{\frac{\sum \beta_{OLS,j} ^2}{s}} - 1 $$ the one-to-one relationship previously claimed. I expect this is harder to establish in the non-orthogonal case but the result carries regardless. Look again at (2) though and you'll see we are still missing the $\lambda$. To get an optimal value for it, you may either use cross-validation or look at the ridge trace. The latter method involves constructing a sequence of $\lambda$ in (0,1) and looking how the estimates change. You then select the $\lambda$ that stabilizes them. This method was suggested in the second of the references below by the way and is the oldest one. 
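As a quick numerical sanity check of equation (1), here is a short R sketch; the simulated data and the fixed value of $\lambda$ are arbitrary choices for illustration only.

set.seed(1)
n <- 100; p <- 5
X <- scale(matrix(rnorm(n * p), n, p))        # standardized predictors, as is usual for ridge
y <- X %*% c(2, -1, 0, 0, 1) + rnorm(n)
y <- y - mean(y)                              # center y so that no intercept is needed
lambda <- 3
# Closed-form ridge estimator: (X'X + lambda I)^{-1} X'y
beta_closed <- solve(crossprod(X) + lambda * diag(p), crossprod(X, y))
# The same estimate from directly minimizing RSS + lambda * sum(beta^2)
obj <- function(b) sum((y - X %*% b)^2) + lambda * sum(b^2)
beta_num <- optim(rep(0, p), obj, method = "BFGS")$par
cbind(closed_form = drop(beta_closed), numerical = beta_num)

The two columns should agree to several decimal places.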
References Hoerl, Arthur E., and Robert W. Kennard. "Ridge regression: Biased estimation for nonorthogonal problems." Technometrics 12.1 (1970): 55-67. Hoerl, Arthur E., and Robert W. Kennard. "Ridge regression: applications to nonorthogonal problems." Technometrics 12.1 (1970): 69-82.
{ "source": [ "https://stats.stackexchange.com/questions/190993", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/86275/" ] }
191,396
The image below shows a continuous curve of false positive rates vs. true positive rates: However, what I don't immediately get is how these rates are being calculated. If a method is applied to a dataset, it has a certain FP rate and a certain FN rate. Doesn't that mean that each method should have a single point rather than a curve? Of course there's multiple ways to configure a method, producing multiple different points, but it's not clear to me how there is this continuum of rates or how it's generated.
The plot is a ROC curve: each (False Positive Rate, True Positive Rate) point is obtained by applying a different classification threshold to the model's predicted scores, so a single fitted model traces out a whole curve as the threshold is varied. Assuming you have a uniform utility function, the optimal threshold value is the one for the point closest to (0, 1).
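A small R sketch showing how the curve arises from sweeping a threshold over predicted scores; the data and the logistic model are simulated purely for illustration.

set.seed(7)
n <- 1000
x <- rnorm(n)
y <- rbinom(n, 1, plogis(2 * x))                          # true class labels
score <- predict(glm(y ~ x, family = binomial), type = "response")
thresholds <- seq(0, 1, by = 0.01)
tpr <- sapply(thresholds, function(t) mean(score[y == 1] >= t))   # true positive rate
fpr <- sapply(thresholds, function(t) mean(score[y == 0] >= t))   # false positive rate
plot(fpr, tpr, type = "l", xlab = "False positive rate", ylab = "True positive rate")
abline(0, 1, lty = 2)    # chance line

Each threshold contributes one (FPR, TPR) pair, which is why a single classifier yields a curve rather than a point.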
{ "source": [ "https://stats.stackexchange.com/questions/191396", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/85525/" ] }
191,492
How does saddlepoint approximation work? What sort of problem is it good for? (Feel free to use a particular example or examples by way of illustration) Are there any drawbacks, difficulties, things to watch out for, or traps for the unwary?
The saddlepoint approximation to a probability density function (it works likewise for mass functions, but I will only talk here in terms of densities) is a surprisingly well working approximation, that can be seen as a refinement on the central limit theorem. So, it will only work in settings where there is a central limit theorem, but it needs stronger assumptions. We start with the assumption that the moment generating function exists and is twice differentiable. This implies in particular that all moments exist. Let $X$ be a random variable with moment generating function (mgf) $$ \DeclareMathOperator{\E}{\mathbb{E}} M(t) = \E e^{t X} $$ and cgf (cumulant generating function) $K(t)=\log M(t)$ (where $\log $ denotes the natural logarithm). In the development I will follow closely Ronald W Butler: "Saddlepoint Approximations with Applications" (CUP). We will develop the saddlepoint approximation using the Laplace approximation to a certain integral. Write $$ e^{K(t)} = \int_{-\infty}^\infty e^{t x} f(x) \; dx =\int_{-\infty}^\infty \exp(tx+\log f(x) ) \; dx \\ = \int_{-\infty}^\infty \exp(-h(t,x)) \; dx $$ where $h(t,x) = -tx - \log f(x) $ . Now we will Taylor expand $h(t,x)$ in $x$ considering $t$ as a constant. This gives $$ h(t,x)=h(t,x_0) + h'(t,x_0)(x-x_0) +\frac12 h''(t,x_0) (x-x_0)^2 +\dotsm $$ where $'$ denotes differentiation with respect to $x$ . Note that $$ h'(t,x)=-t-\frac{\partial}{\partial x}\log f(x) \\ h''(t,x)= -\frac{\partial^2}{\partial x^2} \log f(x) > 0 $$ (the last inequality by assumption as it is needed for the approximation to work). Let $x_t$ be the solution to $h'(t,x_t)=0$ . We will assume that this gives a minimum for $h(t,x)$ as a function of $x$ . Using this expansion in the integral and forgetting about the $\dotsm$ part, gives $$ e^{K(t)} \approx \int_{-\infty}^\infty \exp(-h(t,x_t)-\frac12 h''(t,x_t) (x-x_t)^2 ) \; dx \\ = e^{-h(t,x_t)} \int_{-\infty}^\infty e^{-\frac12 h''(t,x_t) (x-x_t)^2} \; dx $$ which is a Gaussian integral, giving $$ e^{K(t)} \approx e^{-h(t,x_t)} \sqrt{\frac{2\pi}{h''(t,x_t)}}. $$ This gives (a first version) of the saddlepoint approximation as $$ f(x_t) \approx \sqrt{\frac{h''(t,x_t)}{2\pi}} \exp(K(t) -t x_t) \\ \tag{*} \label{*} $$ Note that the approximation has the form of an exponential family. Now we need to do some work to get this in a more useful form. From $h'(t,x_t)=0$ we get $$ t = -\frac{\partial}{\partial x_t} \log f(x_t). $$ Differentiating this with respect to $x_t$ gives $$ \frac{\partial t}{\partial x_t} = -\frac{\partial^2}{\partial x_t^2} \log f(x_t) > 0$$ (by our assumptions), so the relationship between $t$ and $x_t$ is monotone, so $x_t$ is well defined. We need an approximation to $\frac{\partial}{\partial x_t} \log f(x_t)$ . To that end, we get by solving from \eqref{*} $$ \log f(x_t) = K(t) -t x_t -\frac12 \log \frac{2\pi}{-\frac{\partial^2}{\partial x_t^2} \log f(x_t)}. \tag{**} \label{**} $$ Assuming the last term above only depends weakly on $x_t$ , so its derivative with respect to $x_t$ is approximately zero (we will come back to comment on this), we get $$ \frac{\partial \log f(x_t)}{\partial x_t} \approx (K'(t)-x_t) \frac{\partial t}{\partial x_t} - t $$ Up to this approximation we then have that $$ 0 \approx t + \frac{\partial \log f(x_t)}{\partial x_t} = (K'(t)-x_t) \frac{\partial t}{\partial x_t} $$ so that $t$ and $x_t$ must be related through the equation $$ K'(t) - x_t=0, \\ \tag{Β§} \label{Β§} $$ which is called the saddlepoint equation. 
What we miss now in determining \eqref{*} is $$ h''(t,x_t) = -\frac{\partial^2 \log f(x_t)}{\partial x_t^2} \\ = -\frac{\partial}{\partial x_t} \left(\frac{\partial \log f(x_t)}{\partial x_t} \right) \\ = -\frac{\partial}{\partial x_t}(-t)= \left(\frac{\partial x_t}{\partial t}\right)^{-1} $$ and that we can find by implicit differentiation of the saddlepoint equation $K'(t)=x_t$ : $$ \frac{\partial x_t}{\partial t} = K''(t). $$ The result is that (up to our approximation) $$ h''(t,x_t) = \frac1{K''(t)} $$ Putting everything together, we have the final saddlepoint approximation of the density $f(x)$ as $$ f(x_t) \approx e^{K(t)- t x_t} \sqrt{\frac1{2\pi K''(t)}}. $$ Now, to use this practically, to approximate the density at a specific point $x_t$ , we solve the saddlepoint equation for that $x_t$ to find $t$ . The saddlepoint approximation is often stated as an approximation to the density of the mean based on $n$ iid observations $X_1, X_2, \dotsc, X_n$ . The cumulant generating function of the mean is simply $n K(t)$ , so the saddlepoint approximation for the mean becomes $$ f(\bar{x}_t) = e^{nK(t) - n t \bar{x}_t} \sqrt{\frac{n}{2\pi K''(t)}} $$ Let us look at a first example. What does we get if we try to approximate the standard normal density $$ f(x)=\frac1{\sqrt{2\pi}} e^{-\frac12 x^2} $$ The mgf is $M(t)=\exp(\frac12 t^2)$ so $$ K(t)=\frac12 t^2 \\ K'(t)=t \\ K''(t)=1 $$ so the saddlepoint equation is $t=x_t$ and the saddlepoint approximation gives $$ f(x_t) \approx e^{\frac12 t^2 -t x_t} \sqrt{\frac1{2\pi \cdot 1}} = \frac1{\sqrt{2\pi}} e^{-\frac12 x_t^2} $$ so in this case the approximation is exact. Let us look at a very different application: Bootstrap in the transform domain, we can do bootstrapping analytically using the saddlepoint approximation to the bootstrap distribution of the mean! Assume we have $X_1, X_2, \dotsc, X_n$ iid distributed from some density $f$ (in the simulated example we will use a unit exponential distribution). From the sample we calculate the empirical moment generating function $$ \hat{M}(t)= \frac1{n} \sum_{i=1}^n e^{t x_i} $$ and then the empirical cgf $\hat{K}(t) = \log \hat{M}(t)$ . We need the empirical mgf for the mean which is $\log ( \hat{M}(t/n)^n )$ and the empirical cgf for the mean $$ \hat{K}_{\bar{X}}(t) = n \log \hat{M}(t/n) $$ which we use to construct a saddlepoint approximation. In the following some R code (R version 3.2.3): set.seed(1234) x <- rexp(10) require(Deriv) ### From CRAN drule[["sexpmean"]] <- alist(t=sexpmean1(t)) # adding diff rules to # Deriv drule[["sexpmean1"]] <- alist(t=sexpmean2(t)) ### make_ecgf_mean <- function(x) { n <- length(x) sexpmean <- function(t) mean(exp(t*x)) sexpmean1 <- function(t) mean(x*exp(t*x)) sexpmean2 <- function(t) mean(x*x*exp(t*x)) emgf <- function(t) sexpmean(t) ecgf <- function(t) n * log( emgf(t/n) ) ecgf1 <- Deriv(ecgf) ecgf2 <- Deriv(ecgf1) return( list(ecgf=Vectorize(ecgf), ecgf1=Vectorize(ecgf1), ecgf2 =Vectorize(ecgf2) ) ) } ### Now we need a function solving the saddlepoint equation and constructing ### the approximation: ### make_spa <- function(cumgenfun_list) { K <- cumgenfun_list[[1]] K1 <- cumgenfun_list[[2]] K2 <- cumgenfun_list[[3]] # local function for solving the speq: solve_speq <- function(x) { # Returns saddle point! 
uniroot(function(s) K1(s)-x,lower=-100, upper = 100, extendInt = "yes")$root } # Function finding fhat for one specific x: fhat0 <- function(x) { # Solve saddlepoint equation: s <- solve_speq(x) # Calculating saddlepoint density value: (1/sqrt(2*pi*K2(s)))*exp(K(s)-s*x) } # Returning a vectorized version: return(Vectorize(fhat0)) } #end make_spa (I have tried to write this as general code which can be modified easily for other cgfs, but the code is still not very robust ...) Then we use this for a sample of ten independent observations from a unit exponential distribution. We do the usual nonparametric bootstrapping "by hand", plot the resulting bootstrap histogram for the mean, and overplot the saddlepoint approximation: > ECGF <- make_ecgf_mean(x) > fhat <- make_spa(ECGF) > fhat function (x) { args <- lapply(as.list(match.call())[-1L], eval, parent.frame()) names <- if (is.null(names(args))) character(length(args)) else names(args) dovec <- names %in% vectorize.args do.call("mapply", c(FUN = FUN, args[dovec], MoreArgs = list(args[!dovec]), SIMPLIFY = SIMPLIFY, USE.NAMES = USE.NAMES)) } <environment: 0x4e5a598> > boots <- replicate(10000, mean(sample(x, length(x), replace=TRUE)), simplify=TRUE) > hist(boots, prob=TRUE) > plot(fhat, from=0.001, to=2, col="red", add=TRUE) Giving the resulting plot: The approximation seems to be rather good! We could get an even better approximation by integrating the saddlepoint approximation and rescaling: > integrate(fhat, lower=0.1, upper=2) 1.026476 with absolute error < 9.7e-07 Now the cumulative distribution function based on this approximation could be found by numerical integration, but it is also possible to make a direct saddlepoint approximation for that. But that is for another post, this is long enough. Finally, some comments left out of the development above. In \eqref{**} we did an approximation essentially ignoring the third term. Why can we do that? One observation is that for the normal density function, the left-out term contributes nothing, so that approximation is exact. So, since the saddlepoint approximation is a refinement of the central limit theorem, we are somewhat close to the normal, so this should work well. One can also look at specific examples. Looking at the saddlepoint approximation to the Poisson distribution, looking at that left-out third term, in this case that becomes a trigamma function, which indeed is rather flat when the argument is not too close to zero. Finally, why the name? The name comes from an alternative derivation, using complex-analysis techniques. Later we can look into that, but in another post!
{ "source": [ "https://stats.stackexchange.com/questions/191492", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/805/" ] }
191,851
I am building a VAR model to forecast the price of an asset and would like to know whether my method is statistically sound, whether the tests I have included are relevant and if more are needed to ensure a reliable forecast based on my input variables. Below is my current process to check for Granger causality and forecast the selected VAR model. require("forecast") require("vars") #Read Data da=read.table("VARdata.txt", header=T) dac <- c(2,3) # Select variables x=da[,dac] plot.ts(x) summary(x) #Run Augmented Dickey-Fuller tests to determine stationarity and differences to achieve stationarity. ndiffs(x[, "VAR1"], alpha = 0.05, test = c("adf")) ndiffs(x[, "VAR2"], alpha = 0.05, test = c("adf")) #Difference to achieve stationarity d.x1 = diff(x[, "VAR1"], differences = 2) d.x2 = diff(x[, "VAR2"], differences = 2) dx = cbind(d.x1, d.x2) plot.ts(dx) #Lag optimisation VARselect(dx, lag.max = 10, type = "both") #Vector autoregression with lags set according to results of lag optimisation. var = VAR(dx, p=2) #Test for serial autocorrelation using the Portmanteau test #Rerun var model with other suggested lags if H0 can be rejected at 0.05 serial.test(var, lags.pt = 10, type = "PT.asymptotic") #ARCH test (Autoregressive conditional heteroscedasdicity) arch.test(var, lags.multi = 10) summary(var) #Granger Causality test #Does x1 granger cause x2? grangertest(d.x2 ~ d.x1, order = 2) #Does x2 granger cause x1? grangertest(d.x1 ~ d.x2, order = 2) #Forecasting prd <- predict(var, n.ahead = 10, ci = 0.95, dumvar = NULL) print(prd) plot(prd, "single") Is this method sound?
I think you got it pretty right, but when building a VAR model, I usually make sure I follow these steps: 1. Select the variables This is the most important part of building your model. If you want to forecast the price of an asset, you need to include variables that are related with the mechanism of price formation. The best way to do this is through a theoretical model. Since you did not mention what is the asset and what are the other variables you included in your model I really cannot say much about this item, but you can find a summary of asset pricing models in here . 2. Check the data and make the proper adjustments Once you select the variables, you can make some adjustments to the data that will improve the estimation and interpretation of the model. It is useful to use summary statistics and see a plot of the series to detect outliers, missing data and other strange behaviors. When working with price data, people usually take natural logs, which is a variance-stabilizing transformation and also has a good interpretation (price difference in logs become continuously compound returns). I'm not sure if you have taken logs before estimating the model, but it is a good idea to do so if you are working with asset prices. 3. Check if data contains non-stationary components Now you can use unit root tests to check if your series are stationary. If you are only interested in forecasting, as noted by @JacobH, you can run VAR in levels even when your series are non-stationary, but then your standard errors cannot be trusted, meaning that you can't make inference about the value of the coefficients. You've tested stationary using the ADF test, which is very commonly used in these applications, but note that you should specify if you want to run the test with i) no constant and no trend; ii) a constant and no trend; and iii) a constant and a trend. Usually price series have stochastic trends, so a linear trend will not be accurate. In this case you may choose the specification ii. In your code you used the ndiffs function of the forecast package. I am not sure which of those three alternatives this function implements in order to calculate the number of differences (I couldn't find it in the documentation). To check your result you may want to use the ur.df function in the "urca" package: adf <- ur.df(x[, "VAR1"], type = "drift", lags = 10, selectlags = "AIC") Note that this command will run the ADF test with a constant and the lags selected by the AIC command, with maximum lag of 10. If you have problems interpreting the results just look at this question . If the series are I(1) just use the difference, which will be equal to the continuously compounded returns. If the test indicates that the series are I(2) and you are in doubt about that you can use other tests, e.g. Phillips-Perron test ( PP.test function in R). If all tests confirm that your series are I(2) (remember to use the log of the series before running the tests) then take the second difference, but note that your interpretation of the results will change, since now you are working with the difference of the continuously compounded returns. Prices of assets are usually I(1) since they are close to a random walk, which is a white noise when applying the first difference. 4. Select the order of the model This can be done with commonly used criteria such as Akaike, Schwarz (BIC) and Hannan-Quinn. You've done that with the VARselect function and that is right, but remember what is the criterion that you used to make your decision. 
Usually different criteria indicate different orders for the VAR. 5. Check if there are cointegrating relationships If all your series are I(1) or I(2), before running a VAR model it is usually a good idea to check whether there are cointegrating relationships between the series, especially if you want to do impulse response analysis with the residuals. You can do that using the Johansen test or the Engle-Granger test (only for bivariate models). In R you can run the Johansen test with the ca.jo function of the "urca" package. Note that this test also has different specifications. For price series I usually use the following code (where p is the lag length from item 4, performed with the series in levels): jo_eigen <- ca.jo(x, type = "eigen", ecdet = "const", K = p) jo_trace <- ca.jo(x, type = "trace", ecdet = "const", K = p) 6. Estimate the model If your series are not cointegrated, you can easily estimate the model with the VAR command, as done in your code. In case the series are cointegrated you need to consider the long-run relationship by estimating a Vector Error Correction model with the following code (where k is the order of cointegration): vecm <- cajorls(jo_eigen, r = k) 7. Run diagnostic tests To test if your model is well specified you can run a test of serial correlation on the residuals. In your code you used a Portmanteau test with the serial.test function. I've never used this function but I think it is OK. There is also a multivariate version of the Ljung-Box test implemented in the package MTS, which you can run with the function mq. 8. Make predictions After you are sure your model is well specified you can use the predict function as you did in your code. You can even plot impulse response functions to check how the variables respond to a particular shock using the irf function. 9. Evaluate predictions Once you have made your predictions you must evaluate them and compare them against other models. Some methods to evaluate the accuracy of forecasts can be found here, but to do that it is crucial that you divide your series into a training and a test set, as explained in the link.
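For item 9, here is a rough sketch of an out-of-sample comparison in R. The holdout length, the naive benchmark and the RMSE criterion are arbitrary choices for illustration; dx, the column name d.x1 and the lag order come from the code in the question.

# Hold out the last h observations of the differenced series for evaluation
h <- 10
n <- nrow(dx)
train <- dx[1:(n - h), ]
test <- dx[(n - h + 1):n, ]
var_fit <- VAR(train, p = 2)
fc <- predict(var_fit, n.ahead = h)$fcst$d.x1[, "fcst"]   # point forecasts for d.x1
# Compare against a naive benchmark (the training-sample mean) on the same window
rmse <- function(e) sqrt(mean(e^2))
c(VAR = rmse(test[, "d.x1"] - fc),
  naive = rmse(test[, "d.x1"] - mean(train[, "d.x1"])))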
{ "source": [ "https://stats.stackexchange.com/questions/191851", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/91349/" ] }
191,917
I feel like I've seen this topic discussed here before, but I wasn't able to find anything specific. Then again, I'm also not really sure what to search for. I have a one dimensional set of ordered data. I hypothesize that all of the points in the set are drawn from the same distribution. How can I test this hypothesis? Is it reasonable to test against a general alternative of "the observations in this data set are drawn from two different distributions"? Ideally, I would like to identify which points come from the "other" distribution. Since my data is ordered, could I get away with identifying a cut point, after somehow testing whether it's "valid" to cut the data? Edit: as per Glen_b's answer, I would be interested in strictly positive, unimodal distributions. I'd also be interested in the special case of assuming a distribution and then testing for different parameters .
Imagine two scenarios: 1. the data points were all drawn from the same distribution -- one that was uniform on (16,36); 2. the data points were drawn from a 50-50 mix of two populations: a. population A, which is shaped like this: b. population B, shaped like this: ... such that the mixture of the two looks exactly like the case in 1. How could they be told apart? Whatever shapes you choose for the two populations, there's always going to be a single population distribution that has the same shape. This argument clearly demonstrates that for the general case you simply can't do it. There's no possible way to differentiate. If you introduce information about the populations (assumptions, effectively) then there may often be ways to proceed*, but the general case is dead. * e.g. if you assume that populations are unimodal and have sufficiently different means you can get somewhere [The restrictions that were added to the question are not sufficient to avoid a different version of the kind of problem I describe above -- we can still write a unimodal null on the positive half-line as a 50-50 mixture of two unimodal distributions on the positive half-line. Of course if you have a more specific null, this becomes much less of an issue. Alternatively it should still be possible to restrict the class of alternatives further until we were in a position to test against some mixture alternative. Or some additional restrictions might be applied to both null and alternative that would make them distinguishable.]
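A tiny simulation in the same spirit as the uniform example, using a deliberately simpler pair of components (U(16,26) and U(26,36)) than the shapes sketched above: a 50-50 mixture of the two is indistinguishable from a single U(16,36) sample.

set.seed(1)
n <- 1e5
one_pop <- runif(n, 16, 36)                     # everything from one distribution
pick <- rbinom(n, 1, 0.5)                       # 50-50 mixture of two populations
mixture <- ifelse(pick == 1, runif(n, 16, 26), runif(n, 26, 36))
# Any comparison of the two samples sees no difference
ks.test(one_pop, mixture)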
{ "source": [ "https://stats.stackexchange.com/questions/191917", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/36229/" ] }
192,948
What is "logistic" about the logistic distribution, in a common sense way? What is the etymology of and the lexical rationale for the name, not just pure math definition?
The source document for the name "logistic" seems to be this 1844 presentation by P.-F. Verhulst, "Recherches mathématiques sur la loi d'accroissement de la population," in NOUVEAUX MÉMOIRES DE L'ACADÉMIE ROYALE DES SCIENCES ET BELLES-LETTRES DE BRUXELLES, vol. 18, p 3. He differentiated what we would now call exponential growth of population when resources are essentially unlimited (as seen for example in the growth of the US population in the late 18th and early 19th centuries) from the slower growth when resource limits begin to be reached. What we call exponential growth, however, he called a " logarithmique " curve (page 6). He then developed a formula for population growth in the presence of resource limits, and said of the resulting curve: "Nous donnerons le nom de logistique à la courbe..." which I translate as "We call the curve logistic ..." (original emphasis). That would seem to be intended to distinguish this growth pattern from the " logarithmique " growth in the absence of resource limits, as the figure at the end of the paper illustrates. The specific form of the equation presented by Verhulst allows for an arbitrary upper asymptote (eq. 5, page 9), while the form we know and love in statistics is the specific case with an asymptote of 1.
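To visualize the distinction Verhulst was drawing, here is a short R sketch contrasting unconstrained ("logarithmique", i.e. exponential) growth with logistic growth toward an upper asymptote; the parameter values are arbitrary, and the logistic curve is written in the modern form with carrying capacity K rather than Verhulst's original notation.

t <- seq(0, 10, length.out = 200)
r <- 0.8     # growth rate
K <- 100     # upper asymptote ("carrying capacity")
P0 <- 2      # initial population
exponential_growth <- P0 * exp(r * t)
logistic_growth <- K / (1 + ((K - P0) / P0) * exp(-r * t))
matplot(t, cbind(exponential_growth, logistic_growth), type = "l", lty = 1:2,
        ylim = c(0, 150), xlab = "Time", ylab = "Population")
legend("topleft", c("exponential ('logarithmique')", "logistic"), lty = 1:2)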
{ "source": [ "https://stats.stackexchange.com/questions/192948", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/102242/" ] }
193,082
Well the question says it all. What is meant by "pre training a neural network"? Can someone explain in pure simple English? I can't seem to find any resources related to it. It would be great if someone can point me to them.
The usual way of training a network: You want to train a neural network to perform a task (e.g. classification) on a data set (e.g. a set of images). You start training by initializing the weights randomly. As soon as you start training, the weights are changed in order to perform the task with less mistakes (i.e. optimization). Once you're satisfied with the training results you save the weights of your network somewhere. You are now interested in training a network to perform a new task (e.g. object detection) on a different data set (e.g. images too but not the same as the ones you used before). Instead of repeating what you did for the first network and start from training with randomly initialized weights, you can use the weights you saved from the previous network as the initial weight values for your new experiment. Initializing the weights this way is referred to as using a pre-trained network. The first network is your pre-trained network. The second one is the network you are fine-tuning. The idea behind pre-training is that random initialization is...well...random, the values of the weights have nothing to do with the task you're trying to solve. Why should a set of values be any better than another set? But how else would you initialize the weights? If you knew how to initialize them properly for the task, you might as well set them to the optimal values (slightly exaggerated). No need to train anything. You have the optimal solution to your problem. Pre-training gives the network a head start. As if it has seen the data before. What to watch out for when pre-training: The first task used in pre-training the network can be the same as the fine-tuning stage. The datasets used for pre-training vs. fine-tuning can also be the same, but can also be different. It's really interesting to see how pre-training on a different task and different dataset can still be transferred to a new dataset and new task that are slightly different. Using a pre-trained network generally makes sense if both tasks or both datasets have something in common. The bigger the gap, the less effective pre-training will be. It makes little sense to pre-train a network for image classification by training it on financial data first. In this case there's too much disconnect between the pre-training and fine-tuning stages.
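A minimal PyTorch-style sketch of the two stages (the architecture, task, and file name are placeholders, not a recipe from any particular paper):

import torch
import torch.nn as nn

def make_net(n_outputs):
    # toy architecture, standing in for whatever network you actually use
    return nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, n_outputs))

# Stage 1: pre-training on the first task/dataset (training loop omitted)
pretrained = make_net(n_outputs=10)
# ... train `pretrained` on task A here ...
torch.save(pretrained.state_dict(), "pretrained_weights.pt")

# Stage 2: fine-tuning -- start from the saved weights instead of random init,
# then continue training on the new task/dataset
finetuned = make_net(n_outputs=10)
finetuned.load_state_dict(torch.load("pretrained_weights.pt"))
# ... train `finetuned` on task B here (often with a smaller learning rate) ...

If the new task needs a different output layer, the usual trick is to copy only the earlier layers and re-initialize the head.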
{ "source": [ "https://stats.stackexchange.com/questions/193082", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/101559/" ] }
193,306
Gradient descent and many other methods are useful for finding local minima in cost functions. They can be efficient when the cost function can be evaluated quickly at each point, whether numerically or analytically. I have what appears to me to be an unusual situation. Each evaluation of my cost function is expensive. I am attempting to find a set of parameters that minimize a 3D surface against ground truth surfaces. Whenever I change a parameter, I need to run the algorithm against the entire sample cohort to measure its effect. In order to calculate a gradient, I need to change all 15 parameters independently, meaning I have to regenerate all the surfaces and compare against the sample cohort way too many times per gradient, and definitely way too many times over the course of optimization. I have developed a method to circumvent this problem and am currently evaluating it, but I am surprised that I have not found much in the literature regarding expensive cost function evaluations. This makes me wonder if I am making the problem harder than it is and that there might be a better way already available. So my questions are basically this: Does anyone know of methods for optimizing cost functions, convex or not, when evaluation is slow? Or, am I doing something silly in the first place by rerunning the algorithm and comparing against the sample cohort so many times?
TL;DR I recommend using LIPO. It is provably correct and provably better than pure random search (PRS). It is also extremely simple to implement, and has no hyperparameters. I have not conducted an analysis that compares LIPO to BO, but my expectation is that the simplicity and efficiency of LIPO imply that it will out-perform BO. (See also: What are some of the disavantage of bayesian hyper parameter optimization? ) LIPO and its Variants This is an exciting arrival which, if it is not new, is certainly new to me. It proceeds by alternating between placing informed bounds on the function, and sampling from the best bound, and using quadratic approximations. I'm still working through all the details, but I think this is very promising. This is a nice blog write-up , and the paper is Cédric Malherbe and Nicolas Vayatis "Global optimization of Lipschitz functions." LIPO is most useful when the number of hyper-parameters that you are searching over is small. Bayesian Optimization Bayesian Optimization-type methods build Gaussian process surrogate models to explore the parameter space. The main idea is that parameter tuples that are closer together will have similar function values, so the assumption of a co-variance structure among points allows the algorithm to make educated guesses about which parameter tuple is most worthwhile to try next. This strategy helps to reduce the number of function evaluations; in fact, the motivation of BO methods is to keep the number of function evaluations as low as possible while "using the whole buffalo" to make good guesses about what point to test next. There are different figures of merit (expected improvement, expected quantile improvement, probability of improvement...) which are used to compare points to visit next. Contrast this to something like a grid search, which will never use any information from its previous function evaluations to inform where to go next. Incidentally, this is also a powerful global optimization technique, and as such makes no assumptions about the convexity of the surface. Additionally, if the function is stochastic (say, evaluations have some inherent random noise), this can be directly accounted for in the GP model. On the other hand, you'll have to fit at least one GP at every iteration (or several, picking the "best", or averaging over alternatives, or fully Bayesian methods). Then, the model is used to make (probably thousands of) predictions, usually in the form of multistart local optimization, with the observation that it's much cheaper to evaluate the GP prediction function than the function under optimization. But even with this computational overhead, it tends to be the case that even nonconvex functions can be optimized with a relatively small number of function calls. A downside to GP is that the number of iterations to get a good result tends to grow with the number of hyper-parameters to search over. A widely-cited paper on the topic is Jones et al (1998), "Efficient Global Optimization of Expensive Black-Box Functions." But there are many variations on this idea. Random Search Even when the cost function is expensive to evaluate, random search can still be useful. Random search is dirt-simple to implement. The only choice for a researcher to make is setting the probability $p$ that you want your results to lie in some quantile $q$; the rest proceeds automatically using results from basic probability.
Suppose your quantile is $q = 0.95$ and you want a $p=0.95$ probability that the model results are in top $100\times (1-q)=5$ percent of all hyperparameter tuples. The probability that all $n$ attempted tuples are not in that window is $q^n = 0.95^n$ (because they are chosen independently at random from the same distribution), so the probability that at least one tuple is in that region is $1 - 0.95^n$ . Putting it all together, we have $$ 1 - q^n \ge p \implies n \ge \frac{\log(1 - p)}{\log(q)} $$ which in our specific case yields $n \ge 59$ . This result is why most people recommend $n=60$ attempted tuples for random search. It's worth noting that $n=60$ is comparable to the number of experiments required to get good results with Gaussian Process-based methods when there are a moderate number of parameters. Unlike Gaussian Processes, for random search, the number of queried tuples does not grow with the number of hyper-parameters to search over. Indeed, the dimension of the problem does not appear in the expression that recommends attempting $n=60$ random values. However, this does not mean that random search is "immune" to the curse of dimensionality. Increasing the dimension of the hyperparameter search space can mean that the average result drawn from among the "best 5% of values" is still very poor. More information: The "Amazing Hidden Power" of Random Search? The intuition is that if we increase the volume of the search space, then we are naturally also increasing the volume of 5% of the search space. Since you have a probabilistic characterization of how good the results are, this result can be a persuasive tool to convince your boss that running additional experiments will yield diminishing marginal returns.
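As a quick sanity check of the arithmetic, and a sketch of what plain random search looks like in code (the objective below is a made-up stand-in for an expensive model-fitting-and-scoring step, and the two hyperparameters are hypothetical):

import math
import numpy as np

p, q = 0.95, 0.95
n = math.ceil(math.log(1 - p) / math.log(q))
print(n)   # 59 -- hence the usual "about 60 random draws" advice

rng = np.random.default_rng(0)

def expensive_objective(c, gamma):
    # stand-in for training + cross-validating a model with these hyperparameters
    return -(math.log10(c) - 1) ** 2 - (math.log10(gamma) + 2) ** 2

candidates = zip(10 ** rng.uniform(-3, 3, n), 10 ** rng.uniform(-5, 1, n))
best_score, best_c, best_gamma = max((expensive_objective(c, g), c, g) for c, g in candidates)
print(best_c, best_gamma, best_score)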
{ "source": [ "https://stats.stackexchange.com/questions/193306", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/89812/" ] }
193,908
I'm taking Andrew Ng's course on Machine Learning through Coursera . For equations, superscripts are used instead of subscripts. For example, in the following equation $x^{(i)}$ is used instead of $x_i$: $J(\theta_0, \theta_1) = \frac{1}{2m} \sum\limits_{i=1}^{m}{(h_\theta(x^{(i)}) - y^{(i)})^2}$ Apparently, this is common practice. My question is why use superscripts instead of subscripts? Superscripts are already used for exponentiation. Granted I seem to be able to disambiguate between the superscript and exponentiation use cases by paying attention to whether or not parentheses are present, but it still seems confusing.
If $x$ denotes a vector $x \in \mathbb R^m$ then $x_i$ is a standard notation for the $i$-th coordinate of $x$, i.e. $$x = (x_1, x_2, \ldots, x_m)\in\mathbb R^m.$$ If you have a collection of $n$ such vectors, how would you denote an $i$-th vector? You cannot write $x_i$, this has other standard meaning. So sometimes people write $x^{(i)}$ and that is I believe why Andrew Ng does it. I.e. \begin{equation} x^{(1)} = (x_1^{(1)}, x_2^{(1)}, \ldots, x_m^{(1)}) \in \mathbb R^m\\ x^{(2)} = (x_1^{(2)}, x_2^{(2)}, \ldots, x_m^{(2)}) \in \mathbb R^m\\ \ldots \\ x^{(n)} = (x_1^{(n)}, x_2^{(n)}, \ldots, x_m^{(n)}) \in \mathbb R^m.\\ \end{equation}
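In code the same convention shows up as row-versus-element indexing; a small NumPy illustration (the numbers are arbitrary):

import numpy as np

# n = 3 training examples, each a vector in R^4 (m = 4 coordinates/features)
X = np.arange(12.0).reshape(3, 4)

x2 = X[1]        # the whole 2nd example, written x^{(2)} in the course notation
x2_3 = X[1, 2]   # its 3rd coordinate, written x_3^{(2)}
print(x2, x2_3)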
{ "source": [ "https://stats.stackexchange.com/questions/193908", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/102930/" ] }
193,990
I've been looking at Monte Carlo simulation recently, and have been using it to approximate constants such as $\pi$ (circle inside a rectangle, proportionate area). However, I'm unable to think of a corresponding method of approximating the value of $e$ [Euler's number] using Monte Carlo integration. Do you have any pointers on how this can be done?
The simple and elegant way to estimate $e$ by Monte Carlo is described in this paper . The paper is actually about teaching $e$ . Hence, the approach seems perfectly fitting for your goal. The idea's based on an exercise from a popular Ukrainian textbook on probability theory by Gnedenko. See ex.22 on p.183 It happens so that $E[\xi]=e$ , where $\xi$ is a random variable that is defined as follows. It's the minimum number of $n$ such that $\sum_{i=1}^n r_i>1$ and $r_i$ are random numbers from uniform distribution on $[0,1]$ . Beautiful, isn't it?! Since it's an exercise, I'm not sure if it's cool for me to post the solution (proof) here :) If you'd like to prove it yourself, here's a tip: the chapter is called "Moments", which should point you in right direction. If you want to implement it yourself, then don't read further! This is a simple algorithm for Monte Carlo simulation. Draw a uniform random, then another one and so on until the sum exceeds 1. The number of randoms drawn is your first trial. Let's say you got: 0.0180 0.4596 0.7920 Then your first trial rendered 3. Keep doing these trials, and you'll notice that in average you get $e$ . MATLAB code, simulation result and the histogram follow. N = 10000000; n = N; s = 0; i = 0; maxl = 0; f = 0; while n > 0 s = s + rand; i = i + 1; if s > 1 if i > maxl f(i) = 1; maxl = i; else f(i) = f(i) + 1; end i = 0; s = 0; n = n - 1; end end disp ((1:maxl)*f'/sum(f)) bar(f/sum(f)) grid on f/sum(f) The result and the histogram: 2.7183 ans = Columns 1 through 8 0 0.5000 0.3332 0.1250 0.0334 0.0070 0.0012 0.0002 Columns 9 through 11 0.0000 0.0000 0.0000 UPDATE: I updated my code to get rid of the array of trial results so that it doesn't take RAM. I also printed the PMF estimation. Update 2: Here's my Excel solution. Put a button in Excel and link it to the following VBA macro: Private Sub CommandButton1_Click() n = Cells(1, 4).Value Range("A:B").Value = "" n = n s = 0 i = 0 maxl = 0 Cells(1, 2).Value = "Frequency" Cells(1, 1).Value = "n" Cells(1, 3).Value = "# of trials" Cells(2, 3).Value = "simulated e" While n > 0 s = s + Rnd() i = i + 1 If s > 1 Then If i > maxl Then Cells(i, 1).Value = i Cells(i, 2).Value = 1 maxl = i Else Cells(i, 1).Value = i Cells(i, 2).Value = Cells(i, 2).Value + 1 End If i = 0 s = 0 n = n - 1 End If Wend s = 0 For i = 2 To maxl s = s + Cells(i, 1) * Cells(i, 2) Next Cells(2, 4).Value = s / Cells(1, 4).Value Rem bar (f / Sum(f)) Rem grid on Rem f/sum(f) End Sub Enter the number of trials, such as 1000, in the cell D1, and click the button. Here how the screen should look like after the first run: UPDATE 3: Silverfish inspired me to another way, not as elegant as the first one but still cool. It calculated the volumes of n-simplexes using Sobol sequences. s = 2; for i=2:10 p=sobolset(i); N = 10000; X=net(p,N)'; s = s + (sum(sum(X)<1)/N); end disp(s) 2.712800000000001 Coincidentally he wrote the first book on Monte Carlo method I read back in high school. It's the best introduction to the method in my opinion. UPDATE 4: Silverfish in comments suggested a simple Excel formula implementation. This is the kind of result you get with his approach after about total 1 million random numbers and 185K trials: Obviously, this is much slower than Excel VBA implementation. Especially, if you modify my VBA code to not update the cell values inside the loop, and only do it once all stats are collected. UPDATE 5 Xi'an's solution #3 is closely related (or even the same in some sense as per jwg's comment in the thread). 
It's hard to say who came up with the idea first, Forsythe or Gnedenko. Gnedenko's original 1950 edition in Russian doesn't have Problems sections in Chapters. So, at a first glance I couldn't find this problem where it appears in later editions. Maybe it was added later or buried in the text. As I commented in Xi'an's answer, Forsythe's approach is linked to another interesting area: the distribution of distances between peaks (extrema) in random (IID) sequences. The mean distance happens to be 3. The down sequence in Forsythe's approach ends with a bottom, so if you continue sampling you'll get another bottom at some point, then another, etc. You could track the distance between them and build the distribution.
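For readers without MATLAB or Excel, a direct Python port of the same simulation (draw uniforms until the running sum exceeds 1, record the count, average over many trials):

import numpy as np

rng = np.random.default_rng(0)

def one_trial():
    # number of Uniform(0,1) draws needed for the running sum to exceed 1
    s, k = 0.0, 0
    while s <= 1.0:
        s += rng.random()
        k += 1
    return k

n_trials = 1_000_000
print(np.mean([one_trial() for _ in range(n_trials)]))   # ~2.718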
{ "source": [ "https://stats.stackexchange.com/questions/193990", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/96530/" ] }
194,035
Background : I do not have a formal training in Bayesian statistics (though I am very interested in learning more), but I know enough--I think--to get the gist of why many feel as though they are preferable to Frequentist statistics. Even the undergraduates in the introductory statistics (in social sciences) class I am teaching find the Bayesian approach appealing--"Why are we interested in calculating the probability of the data, given the null? Why can't we just quantify the probability of the null hypothesis? Or the alternative hypothesis?" And I've also read threads like these , which attest to the empirical benefits of Bayesian statistics as well. But then I came across this quote by Blasco (2001; emphasis added): If the animal breeder is not interested in the philosophical problems associated with induction, but in tools to solve problems, both Bayesian and frequentist schools of inference are well established and it is not necessary to justify why one or the other school is preferred. Neither of them now has operational difficulties, with the exception of some complex cases... To choose one school or the other should be related to whether there are solutions in one school that the other does not offer , to how easily the problems are solved, and to how comfortable the scientist feels with the particular way of expression results. The Question : The Blasco quote seems to suggest that there might be times when a Frequentist approach is actually preferable to a Bayesian one. And so I am curious: when would a frequentist approach be preferable over a Bayesian approach? I'm interested in answers that tackle the question both conceptually (i.e., when is knowing the probability of the data conditioned on the null hypothesis especially useful?) and empirically (i.e., under what conditions do Frequentist methods excel vs. Bayesian?). It would also be preferable if answers were conveyed as accessibly as possible--it would be nice to take some responses back to my class to share with my students (though I understand some level of technicality is required). Finally, despite being a regular user of Frequentist statistics, I am actually open to the possibility that Bayesian just wins across the board.
Here's five reasons why frequentists methods may be preferred: Faster. Given that Bayesian statistics often give nearly identical answers to frequentist answers (and when they don't, it's not 100% clear that Bayesian is always the way to go), the fact that frequentist statistics can be obtained often several orders of magnitude faster is a strong argument. Likewise, frequentist methods do not require as much memory to store the results. While these things may seem somewhat trivial, especially with smaller datasets, the fact that Bayesian and Frequentist typically agree in results (especially if you have lots of informative data) means that if you are going to care, you may start caring about the less important things. And of course, if you live in the big data world, these are not trivial at all. Non-parametric statistics. I recognize that Bayesian statistics does have non-parametric statistics, but I would argue that the frequentist side of the field has some truly undeniably practical tools, such as the Empirical Distribution Function. No method in the world will ever replace the EDF, nor the Kaplan Meier curves, etc. (although clearly that's not to say those methods are the end of an analysis). Less diagnostics. MCMC methods, the most common method for fitting Bayesian models, typically require more work by the user than their frequentist counter part. Usually, the diagnostic for an MLE estimate is so simple that any good algorithm implementation will do it automatically (although that's not to say every available implementation is good...). As such, frequentist algorithmic diagnostics is typically "make sure there's no red text when fitting the model". Given that all statisticians have limited bandwidth, this frees up more time to ask questions like "is my data really approximately normal?" or "are these hazards really proportional?", etc. Valid inference under model misspecification. We've all heard that "All models are wrong but some are useful", but different areas of research take this more or less seriously. The Frequentist literature is full of methods for fixing up inference when the model is misspecified: bootstrap estimator, cross-validation, sandwich estimator (link also discusses general MLE inference under model misspecification), generalized estimation equations (GEE's), quasi-likelihood methods, etc. As far as I know , there is very little in the Bayesian literature about inference under model misspecification (although there's a lot of discussion of model checking, i.e., posterior predictive checks). I don't think this just by chance: evaluating how an estimator behaves over repeated trials does not require the estimator to be based on a "true" model, but using Bayes theorem does! Freedom from the prior (this is probably the most common reason for why people don't use Bayesian methods for everything). The strength of the Bayesian standpoint is often touted as the use of priors. However, in all of the applied fields I have worked in, the idea of an informative prior in the analysis is not considered. Reading literature on how to elicit priors from non-statistical experts gives good reasoning for this; I've read papers that say things like (cruel straw-man like paraphrasing my own) "Ask the researcher who hired you because they have trouble understanding statistics to give a range that they are 90% certain the effect size they have trouble imagining will be in. This range will typically be too narrow, so arbitrarily try to get them to widen it a little. 
Ask them if their belief looks like a gamma distribution. You will probably have to draw a gamma distribution for them, and show how it can have heavy tails if the shape parameter is small. This will also involve explaining what a PDF is to them."(note: I don't think even statisticians are really able to accurately say a priori whether they are 90% or 95% certain whether the effect size lies in a range, and this difference can have a substantial effect on the analysis!). Truth be told, I'm being quite unkind and there may be situations where eliciting a prior may be a little more straightforward. But you can see how this is a can of worms. Even if you switch to non-informative priors, it can still be a problem; when transforming parameters, what are easily mistaken for non-informative priors suddenly can be seen as very informative! Another example of this is that I've talked with several researchers who adamantly do not want to hear what another expert's interpretation of the data is because empirically, the other experts tend to be over confident. They'd rather just know what can be inferred from the other expert's data and then come to their own conclusion. I can't recall where I heard it, but somewhere I read the phrase "if you're a Bayesian, you want everyone to be a Frequentist". I interpret that to mean that theoretically, if you're a Bayesian and someone describes their analysis results, you should first try to remove the influence of their prior and then figure out what the impact would be if you had used your own. This little exercise would be simplified if they had given you a confidence interval rather than a credible interval! Of course, if you abandon informative priors, there is still utility in Bayesian analyses. Personally, this where I believe their highest utility lies; there are some problems that are extremely hard to get any answer from in using MLE methods but can be solved quite easily with MCMC. But my view on this being Bayesian's highest utility is due to strong priors on my part, so take it with a grain of salt.
{ "source": [ "https://stats.stackexchange.com/questions/194035", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/53456/" ] }
194,056
There's substantial contemporary research on Bayesian Optimization (1) for tuning ML hyperparameters. The driving motivation here is that a minimal number of data points are required to make informed choices about what points are worthwhile to try (objective function calls are expensive, so making fewer is better) because training a model is time-intensive -- some modestly-large SVM problems that I've worked on can take between minutes and hours to complete. On the other hand, Optunity is a particle swarm implementation to address the same task. I'm not overwhelmingly familiar with PSO, but it seems like it must be less efficient in the sense of requiring a larger number of trial points, and therefore objective function evaluations, to assess the hyperparameter surface. Am I missing a key detail that makes PSO preferred to BO in the machine learning context? Or is the choice between the two always inherently contextual for the hyperparameter tuning task? (1) Shahriari et al, "Taking the Human out of the Loop: A Review of Bayesian Optimization."
As the lead developer of Optunity I'll add my two cents. We have done extensive benchmarks comparing Optunity with the most popular Bayesian solvers (e.g., hyperopt, SMAC, bayesopt) on real-world problems, and the results indicate that PSO is in fact not less efficient in many practical cases. In our benchmark, which consists of tuning SVM classifiers on various datasets, Optunity is actually more efficient than hyperopt and SMAC, but slightly less efficient than BayesOpt. I would love to share the results here, but I'm going to wait until Optunity is finally published in JMLR (under review for over a year now, so don't hold your breath ...). As you indicate, increased efficiency is a commonly used selling point for Bayesian optimization, but in practice it only holds water if the assumptions of the underlying surrogate models hold, which is far from trivial. In our experiments, Optunity's very simple PSO solver is often competitive with complex Bayesian approaches in terms of number of function evaluations. Bayesian solvers work very well when provided with good priors, but with an uninformative prior there is virtually no structural benefit over metaheuristic methods like PSO in terms of efficiency. A big selling point for PSO is the fact that it's embarrassingly parallel. Bayesian optimization is often hard to parallelize, due to its inherently sequential nature (hyperopt's implementation being the only real exception). Given opportunities to distribute, which is becoming the norm, Optunity quickly takes the lead in wall-clock time to obtain good solutions. Another key difference between Optunity and most other dedicated hyperparameter optimization libraries is the target audience: Optunity has the simplest interface and is targeted towards non-machine learning experts, whereas most other libraries require some understanding of Bayesian optimization to use effectively (i.e., they are targeted towards specialists). The reason we made the library is that despite the fact that dedicated hyperparameter optimization methods exist, they lack adoption in practice. Most people are still either not tuning at all, doing it manually, or via naive approaches like grid or random search. In our opinion, a key reason for this is the fact that existing libraries prior to developing Optunity were too difficult to use in terms of installation, documentation, API and often limited to a single environment.
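For readers unfamiliar with PSO, here is a bare-bones, deliberately generic particle swarm minimizer -- an illustration of the idea only, not Optunity's implementation, and the toy objective stands in for an expensive model-evaluation step:

import numpy as np

def pso_minimize(f, bounds, n_particles=20, n_iters=50, w=0.7, c1=1.4, c2=1.4, seed=0):
    # bounds: list of (low, high) pairs, one per parameter
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, size=(n_particles, dim))        # particle positions
    v = np.zeros_like(x)                                     # particle velocities
    pbest = x.copy()                                         # per-particle best positions
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()               # swarm-wide best position
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])                   # one evaluation per particle
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best_x, best_val = pso_minimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
                                bounds=[(-5, 5), (-5, 5)])
print(best_x, best_val)

Note that within each iteration the function evaluations (one per particle) are independent of each other, which is exactly why the method parallelizes so easily.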
{ "source": [ "https://stats.stackexchange.com/questions/194056", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/22311/" ] }
194,142
I am currently doing the Udacity Deep Learning Tutorial. In Lesson 3, they talk about a 1x1 convolution. This 1x1 convolution is used in Google Inception Module. I'm having trouble understanding what is a 1x1 convolution. I have also seen this post by Yann Lecun. Could someone kindly explain this to me?
Suppose that I have a conv layer which outputs an $(N, F, H, W)$ shaped tensor where: $N$ is the batch size $F$ is the number of convolutional filters $H, W$ are the spatial dimensions Suppose the input is fed into a conv layer with $F_1$ 1x1 filters, zero padding and stride 1. Then the output of this 1x1 conv layer will have shape $(N, F_1, H , W)$ . So 1x1 conv filters can be used to change the dimensionality in the filter space. If $F_1 > F$ then we are increasing dimensionality, if $F_1 < F$ we are decreasing dimensionality, in the filter dimension. Indeed, in the Google Inception article Going Deeper with Convolutions , they state (bold is mine, not by original authors): One big problem with the above modules, at least in this naive form, is that even a modest number of 5x5 convolutions can be prohibitively expensive on top of a convolutional layer with a large number of filters. This leads to the second idea of the proposed architecture: judiciously applying dimension reductions and projections wherever the computational requirements would increase too much otherwise. This is based on the success of embeddings: even low dimensional embeddings might contain a lot of information about a relatively large image patch... 1x1 convolutions are used to compute reductions before the expensive 3x3 and 5x5 convolutions. Besides being used as reductions, they also include the use of rectified linear activation which makes them dual-purpose. So in the Inception architecture, we use the 1x1 convolutional filters to reduce dimensionality in the filter dimension. As I explained above, these 1x1 conv layers can be used in general to change the filter space dimensionality (either increase or decrease) and in the Inception architecture we see how effective these 1x1 filters can be for dimensionality reduction, explicitly in the filter dimension space, not the spatial dimension space. Perhaps there are other interpretations of 1x1 conv filters, but I prefer this explanation, especially in the context of the Google Inception architecture.
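A quick shape check in PyTorch makes the point concrete (the sizes below are arbitrary):

import torch
import torch.nn as nn

N, F, H, W = 8, 256, 32, 32          # batch size, input filters, spatial dims
F1 = 64                               # number of 1x1 filters = reduced filter dimension

x = torch.randn(N, F, H, W)
reduce_1x1 = nn.Conv2d(in_channels=F, out_channels=F1, kernel_size=1)

y = reduce_1x1(x)
print(y.shape)                        # torch.Size([8, 64, 32, 32]): H and W unchanged, F -> F1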
{ "source": [ "https://stats.stackexchange.com/questions/194142", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/102986/" ] }
194,253
I am reading about entropy and am having a hard time conceptualizing what it means in the continuous case. The wiki page states the following: The probability distribution of the events, coupled with the information amount of every event, forms a random variable whose expected value is the average amount of information, or entropy, generated by this distribution. So if I calculate the entropy associated with a probability distribution that is continuous, what is that really telling me? They give an example about flipping coins, so the discrete case, but if there is an intuitive way to explain through an example like that in the continuous case, that would be great! If it helps, the definition of entropy for a continuous random variable $X$ is the following: $$H(X)=-\int P(x)\log_b P(x)dx$$ where $P(x)$ is a probability distribution function. To try and make this more concrete, consider the case of $X\sim \text{Gamma}(\alpha,\beta)$, then, according to Wikipedia , the entropy is \begin{align} H(X)&=\mathbb{E}[-\ln(P(X))]\\ &=\mathbb{E}[-\alpha\ln(\beta)+\ln(\Gamma(\alpha))+\ln(\Gamma(\alpha))-(\alpha-1)\ln(X)+\beta X]\\ &=\alpha-\ln(\beta)+\ln(\Gamma(\alpha))+(1-\alpha)\left(\frac{d}{d\alpha}\ln(\Gamma(\alpha))\right) \end{align} And so now we have calculated the entropy for a continuous distribution (the Gamma distribution) and so if I now evaluate that expression, $H(X)$, given $\alpha$ and $\beta$, what does that quantity actually tell me?
The entropy tells you how much uncertainty is in the system. Let's say you're looking for a cat, and you know that it's somewhere between your house and the neighbors, which is 1 mile away. Your kids tell you that the probability of a cat being on the distance $x$ from your house is described best by beta distribution $f(x;2,2)$. So a cat could be anywhere between 0 and 1, but more likely to be in the middle, i.e. $x_{max}=1/2$. Let's plug the beta distribution into your equation, then you get $H=-0.125$. Next, you ask your wife and she tells you that the best distribution to describe her knowledge of your cat is the uniform distribution. If you plug it to your entropy equation, you get $H=0$. Both uniform and beta distributions let the cat be anywhere between 0 and 1 miles from your house, but there's more uncertainty in the uniform, because your wife has really no clue where the cat is hiding, while kids have some idea , they think it's more likely to be somewhere in the middle. That's why Beta's entropy is lower than Uniform's. You might try other distributions, maybe your neighbor tells you the cat likes to be near either of the houses, so his beta distribution is with $\alpha=\beta=1/2$. Its $H$ must be lower than that of uniform again, because you get some idea about where to look for a cat. Guess whether your neighbor's information entropy is higher or lower than your kids'? I'd bet on kids any day on these matters. UPDATE: How does this work? One way to think of this is to start with a uniform distribution. If you agree that it's the one with the most uncertainty, then think of disturbing it. Let's look at the discrete case for simplicity. Take $\Delta p$ from one point and add it to another like follows: $$p_i'=p-\Delta p$$ $$p_j'=p+\Delta p$$ Now, let's see how the entropy changes: $$H-H'=p_i\ln p_i-p_i\ln (p_i-\Delta p)+p_j\ln p_j-p_j\ln (p_j+\Delta p)$$ $$=p\ln p-p\ln [p(1-\Delta p/p)]+p\ln p-p\ln [p(1+\Delta p/p)]$$ $$=-\ln (1-\Delta p/p)-\ln (1+\Delta p/p)>0$$ This means that any disturbance from the uniform distribution reduces the entropy (uncertainty). To show the same in continuous case, I'd have to use calculus of variations or something along this line, but you'll get the same kind of result, in principle. UPDATE 2: The mean of $n$ uniform random variables is a random variable itself, and it's from Bates distribution . From CLT we know that this new random variable's variance shrinks as $n\to\infty$. So, uncertainty of its location must reduce with increase in $n$: we're more and more certain that a cat's in the middle. My next plot and MATLAB code shows how the entropy decreases from 0 for $n=1$ (uniform distribution) to $n=13$. I'm using distributions31 library here. x = 0:0.01:1; for k=1:5 i = 1 + (k-1)*3; idx(k) = i; f = @(x)bates_pdf(x,i); funb=@(x)f(x).*log(f(x)); fun = @(x)arrayfun(funb,x); h(k) = -integral(fun,0,1); subplot(1,5+1,k) plot(x,arrayfun(f,x)) title(['Bates(x,' num2str(i) ')']) ylim([0 6]) end subplot(1,5+1,5+1) plot(idx,h) title 'Entropy'
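The same numbers can be checked directly with SciPy, whose frozen distributions expose a differential entropy method (values are in nats):

from scipy import stats

print(stats.uniform(0, 1).entropy())      # 0.0      -- wife: no idea where the cat is
print(stats.beta(2, 2).entropy())         # ~ -0.125 -- kids: probably somewhere in the middle
print(stats.beta(0.5, 0.5).entropy())     # ~ -0.24  -- neighbour: probably near one of the houses

All three beliefs allow the cat to be anywhere on (0, 1); the lower the entropy, the less uncertainty the corresponding belief carries.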
{ "source": [ "https://stats.stackexchange.com/questions/194253", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/98920/" ] }
195,006
I have come across the Dice coefficient for volume similarity and accuracy . It seems to me that these two measures are the same. Is that correct?
These are not the same thing and they are often used in different contexts. The Dice score is often used to quantify the performance of image segmentation methods. There you annotate some ground truth region in your image and then make an automated algorithm to do it. You validate the algorithm by calculating the Dice score, which is a measure of how similar the objects are. So it is the size of the overlap of the two segmentations divided by the total size of the two objects. Using the same terms as describing accuracy, the Dice score is: $$ \text{Dice score} = \frac{2\cdot \text{number of true positives}}{2 \cdot \text{number of true positives} + \text{number of false positives} + \text{number of false negatives}} $$ So the number of true positives is the number that your method finds, the number of positives is the total number of positives that can be found and the number of false positives is the number of points that are negative that your method classifies as positive. The Dice score is not only a measure of how many positives you find, but it also penalizes for the false positives that the method finds, similar to precision. So it is more similar to precision than accuracy. The only difference is the denominator, where you have the total number of positives instead of only the positives that the method finds. So the Dice score is also penalizing for the positives that your algorithm/method could not find. Edit: In the case of image segmentation, let's say that you have a mask with ground truth, let's call the mask $A$ like you suggest. So the mask has values 1 in the pixels where there is something you are trying to find and else zero. Now you have an algorithm to generate image/mask $B$ , which also has to be a binary image, i.e. you create a mask for your segmentation. Then we have the following: Number of positives is the total number of pixels that have intensity 1 in image $A$ Number of true positives is the total number of pixels which have the value 1 in both $A$ and $B$ . So it is the intersection of the regions of ones in $A$ and $B$ . It is the same as using the AND operator on $A$ and $B$ . Number of false positives is the number of pixels which appear as 1 in $B$ but zero in $A$ . If you are doing this for a publication, then write Dice with a capital D, because it is named after a guy named Dice. EDIT: Regarding the comment about a correction: I do not use the traditional formula to calculate the Dice coefficient, but if I translate it to the notation in the other answer it becomes: $$ \text{Dice score} = \frac{2\cdot|A\cap B|}{2\cdot|A\cap B| + |B\backslash A| + |A\backslash B|} = \frac{2\cdot|A\cap B|}{|A| + |B|} $$ Which is equivalent to the traditional definition. It is more convenient to write it the way I wrote it originally to state the formula in terms of false positives. The backslash is the set minus.
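In code, the mask-based version is essentially a one-liner with NumPy (the tiny masks below are just for illustration):

import numpy as np

def dice_score(a, b):
    # a, b: binary masks of the same shape (ground truth and prediction)
    a, b = a.astype(bool), b.astype(bool)
    tp = np.logical_and(a, b).sum()            # true positives = overlap of the two masks
    return 2.0 * tp / (a.sum() + b.sum())      # = 2|A AND B| / (|A| + |B|)

A = np.array([[0, 1, 1],
              [0, 1, 0]])
B = np.array([[0, 1, 0],
              [1, 1, 0]])
print(dice_score(A, B))                        # 2*2 / (3 + 3) = 0.667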
{ "source": [ "https://stats.stackexchange.com/questions/195006", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/41749/" ] }
195,034
Having recently graduated from my PhD program in statistics, I had for the last couple of months began searching for work in the field of statistics. Almost every company I considered had a job posting with a job title of " Data Scientist ". In fact, it felt like long gone were the days of seeing job titles of Statistical Scientist or Statistician . Had being a data scientist really replaced what being a statistician was or were the titles synonymous I wondered? Well, most of the qualifications for the jobs felt like things that would qualify under the title of statistician. Most jobs wanted a PhD in statistics ($\checkmark$), most required understanding experimental design ($\checkmark$), linear regression and anova ($\checkmark$), generalized linear models ($\checkmark$), and other multivariate methods such as PCA ($\checkmark$), as well as knowledge in a statistical computing environment such as R or SAS ($\checkmark$). Sounds like a data scientist is really just a code name for statistician. However, every interview I went to started with the question: "So are you familiar with machine learning algorithms?" More often than not, I found myself having to try and answer questions about big data, high performance computing, and topics on neural networks, CART, support vector machines, boosting trees, unsupervised models, etc. Sure I convinced myself that these were all statistical questions at heart, but at the end of every interview I couldn't help but leave feeling like I knew less and less about what a data scientist is. I am a statistician, but am I a data scientist? I work on scientific problems so I must be a scientist! And also I work with data, so I must be a data scientist! And according to Wikipedia, most academics would agree with me ( https://en.wikipedia.org/wiki/Data_science , etc. ) Although use of the term "data science" has exploded in business environments, many academics and journalists see no distinction between data science and statistics. But if I am going on all these job interviews for a data scientist position, why does it feel like they are never asking me statistical questions? Well after my last interview I did want any good scientist would do and I sought out data to solve this problem (hey, I am a data scientist after all). However, after many countless Google searches later, I ended up right where I started feeling as if I was once again grappling with the definition of what a data scientist was. I didn't know what a data scientist was exactly since there was so many definitions of it, ( http://blog.udacity.com/2014/11/data-science-job-skills.html , http://www-01.ibm.com/software/data/infosphere/data-scientist/ ) but it seemed like everyone was telling me I wanted to be one: https://hbr.org/2012/10/data-scientist-the-sexiest-job-of-the-21st-century/ http://mashable.com/2014/12/25/data-scientist/#jjgsyhcERZqL etc....the list goes on. Well at the end of the day, what I figured out was "what is a data scientist" is a very hard question to answer. Heck, there were two entire months in Amstat where they devoted time to trying to answer this question: http://magazine.amstat.org/blog/2015/10/01/asa-statement-on-the-role-of-statistics-in-data-science/ http://magazine.amstat.org/blog/2015/11/01/statnews2015/ Well for now, I have to be a sexy statistician to be a data scientist but hopefully the cross validated community might be able to shed some light and help me understand what it means to be a data scientist. Aren't all statisticians data scientists? 
(Edit/Update) I thought this might spice up the conversation. I just received an email from the American Statistical Association about a job positing with Microsoft looking for a Data Scientist. Here is the link: Data Scientist Position . I think this is interesting because the role of the position hits on a lot of specific traits we have been talking about, but I think lots of them require a very rigorous background in statistics, as well as contradicting many of the answers posted below. In case the link goes dead, here are the qualities Microsoft seeks in a data scientist: Core Job Requirements and Skills: Business Domain Experience using Analytics Must have experience across several relevant business domains in the utilization of critical thinking skills to conceptualize complex business problems and their solutions using advanced analytics in large scale real-world business data sets The candidate must be able to independently run analytic projects and help our internal clients understand the findings and translate them into action to benefit their business. Predictive Modeling Experience across industries in predictive modeling Business problem definition and conceptual modeling with the client to elicit important relationships and to define the system scope Statistics/Econometrics Exploratory data analytics for continuous and categorical data Specification and estimation of structural model equations for enterprise and consumer behavior, production cost, factor demand, discrete choice, and other technology relationships as needed Advanced statistical techniques to analyze continuous and categorical data Time series analysis and implementation of forecasting models Knowledge and experience in working with multiple variables problems Ability to assess model correctness and conduct diagnostic tests Capability to interpret statistics or economic models Knowledge and experience in building discrete event simulation, and dynamic simulation models Data Management Familiarity with use of T-SQL and analytics for data transformation and the application of exploratory data analysis techniques for very large real-world data sets Attention to data integrity including data redundancy, data accuracy, abnormal or extreme values, data interactions and missing values. Communication and Collaboration Skills Work independently and able to work with a virtual project team that will research innovative solutions to challenging business problems Collaborate with partners, apply critical thinking skills, and drive analytic projects end-to-end Superior communication skills, both verbal and written Visualization of analytic results in a form that is consumable by a diverse set of stakeholders Software Packages Advanced Statistical/Econometric software packages: Python, R, JMP, SAS, Eviews, SAS Enterprise Miner Data exploration, visualization, and management: T-SQL, Excel, PowerBI, and equivalent tools Qualifications: Minimum 5+ years of related experience required Post graduate degree in quantitative field is desirable.
People define Data Science differently, but I think that the common part is: practical knowledge how to deal with data, practical programming skills. Contrary to its name, it's rarely "science". That is, in data science the emphasis is on practical results (like in engineering), not proofs, mathematical purity or rigor characteristic to academic science. Things need to work, and there is little difference if it is based on an academic paper, usage of an existing library, your own code or an impromptu hack. Statistician is not necessary a programmer (may use pen & paper and a dedicated software). Also, some job calls in data science have nothing to do with statistics. E.g. it's data engineering like processing big data, even if the most advanced maths there may be calculating average (personally I wouldn't call this activity "data science", though). Moreover, "data science" is hyped, so tangentially related jobs use this title - to lure the applicants or raise ego of the current workers. I like the taxonomy from Michael Hochster's answer on Quora : Type A Data Scientist: The A is for Analysis. This type is primarily concerned with making sense of data or working with it in a fairly static way. The Type A Data Scientist is very similar to a statistician (and may be one) but knows all the practical details of working with data that aren’t taught in the statistics curriculum: data cleaning, methods for dealing with very large data sets, visualization, deep knowledge of a particular domain, writing well about data, and so on. Type B Data Scientist: The B is for Building. Type B Data Scientists share some statistical background with Type A, but they are also very strong coders and may be trained software engineers. The Type B Data Scientist is mainly interested in using data β€œin production.” They build models which interact with users, often serving recommendations (products, people you may know, ads, movies, search results). In that sense, Type A Data Scientist is a statistician who can program. But, even for quantitive part, there may be people with background more in computer science (e.g. machine learning) than regular statistics, or ones focusing e.g. on data visualization. And The Data Science Venn Diagram (here: hacking ~ programming): see also alternative Venn diagrams ( this and that ). Or even a tweet , while humorous, showing a balanced list of typical skills and activities of a data scientist: See also this post: Data scientist - statistician, programmer, consultant and visualizer? .
{ "source": [ "https://stats.stackexchange.com/questions/195034", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/98920/" ] }
195,446
I am performing hierarchical clustering on data I've gathered and processed from the reddit data dump on Google BigQuery. My process is the following: Get the latest 1000 posts in /r/politics Gather all the comments Process the data and compute an n x m data matrix (n:users/samples, m:posts/features) Calculate the distance matrix for hierarchical clustering Choose a linkage method and perform the hierarchical clustering Plot the data as a dendrogram My question is, how do I determine what the best linkage method is? I'm currently using Ward but how do I know if I should be using single , complete , average , etc? I'm very new to this stuff but I can't find a clear answer online as I'm not sure there is one. So what might be a good idea for my application? Note the data is relatively sparse in the sense that the n x m matrix has a lot of zeroes (most people don't comment on more than a few posts).
Methods overview Short reference about some linkage methods of hierarchical agglomerative cluster analysis (HAC). Basic version of HAC algorithm is one generic; it amounts to updating, at each step, by the formula known as Lance-Williams formula, the proximities between the emergent (merged of two) cluster and all the other clusters (including singleton objects) existing so far. There exist implementations not using Lance-Williams formula. But using it is convenient: it lets one code various linkage methods by the same template. The recurrence formula includes several parameters (alpha, beta, gamma). Depending on the linkage method, the parameters are set differently and so the unwrapped formula obtains a specific view. Many texts on HAC show the formula, its method-specific views and explain the methods. I would recommend articles by Janos Podani as very thorough. The room and need for the different methods arise from the fact that a proximity (distance or similarity) between two clusters or between a cluster and a singleton object could be formulated in many various ways. HAC merges at each step two most close clusters or points, but how to compute the aforesaid proximity in the face that the input proximity matrix was defined between singleton objects only, is the problem to formulate. So, the methods differ in respect to how they define proximity between any two clusters at every step. "Colligation coefficient" (output in agglomeration schedule/history and forming the "Y" axis on a dendrogram) is just the proximity between the two clusters merged at a given step. Method of single linkage or nearest neighbour . Proximity between two clusters is the proximity between their two closest objects. This value is one of values of the input matrix. The conceptual metaphor of this build of cluster, its archetype, is spectrum or chain . Chains could be straight or curvilinear, or could be like "snowflake" or "amoeba" view. Two most dissimilar cluster members can happen to be very much dissimilar in comparison to two most similar. Single linkage method controls only nearest neighbours similarity. Method of complete linkage or farthest neighbour . Proximity between two clusters is the proximity between their two most distant objects. This value is one of values of the input matrix. The metaphor of this build of cluster is circle (in the sense, by hobby or plot) where two most distant from each other members cannot be much more dissimilar than other quite dissimilar pairs (as in circle). Such clusters are "compact" contours by their borders, but they are not necessarily compact inside. Method of between-group average linkage (UPGMA). Proximity between two clusters is the arithmetic mean of all the proximities between the objects of one, on one side, and the objects of the other, on the other side. The metaphor of this build of cluster is quite generic, just united class or close-knit collective; and the method is frequently set the default one in hierarhical clustering packages. Clusters of miscellaneous shapes and outlines can be produced. Simple average , or method of equilibrious between-group average linkage (WPGMA) is the modified previous. Proximity between two clusters is the arithmetic mean of all the proximities between the objects of one, on one side, and the objects of the other, on the other side; while the subclusters of which each of these two clusters were merged recently have equalized influence on that proximity – even if the subclusters differed in the number of objects. 
Method of within-group average linkage (MNDIS). Proximity between two clusters is the arithmetic mean of all the proximities in their joint cluster. This method is an alternative to UPGMA. It usually will lose to it in terms of cluster density, but sometimes will uncover cluster shapes which UPGMA will not. Centroid method (UPGMC). Proximity between two clusters is the proximity between their geometric centroids: [squared] euclidean distance between those. The metaphor of this build of cluster is proximity of platforms (politics). Like in political parties, such clusters can have fractions or "factions", but unless their central figures are apart from each other the union is consistent. Clusters can be various by outline. Median , or equilibrious centroid method (WPGMC) is the modified previous. Proximity between two clusters is the proximity between their geometric centroids ([squared] euclidean distance between those); while the centroids are defined so that the subclusters of which each of these two clusters were merged recently have equalized influence on its centroid – even if the subclusters differed in the number of objects. Name "median" is partly misleading because the method doesn't use medians of data distributions, it is still based on centroids (the means). Ward’s method, or minimal increase of sum-of-squares (MISSQ), sometimes incorrectly called "minimum variance" method. Proximity between two clusters is the magnitude by which the summed square in their joint cluster will be greater than the combined summed square in these two clusters: $SS_{12}-(SS_1+SS_2)$ . (Between two singleton objects this quantity = squared euclidean distance / $2$ .) The metaphor of this build of cluster is type . Intuitively, a type is a cloud more dense and more concentric towards its middle, whereas marginal points are few and could be scattered relatively freely. Some among less well-known methods (see Podany J. New combinatorial clustering methods // Vegetatio, 1989, 81: 61-77.) [also implemented by me as a SPSS macro found on my web-page]: Method of minimal sum-of-squares (MNSSQ). Proximity between two clusters is the summed square in their joint cluster: $SS_{12}$ . (Between two singleton objects this quantity = squared euclidean distance / $2$ .) Method of minimal increase of variance (MIVAR). Proximity between two clusters is the magnitude by which the mean square in their joint cluster will be greater than the weightedly (by the number of objects) averaged mean square in these two clusters: $MS_{12}-(n_1MS_1+n_2MS_2)/(n_1+n_2) = [SS_{12}-(SS_1+SS_2)]/(n_1+n_2)$ . (Between two singleton objects this quantity = squared euclidean distance / $4$ .) Method of minimal variance (MNVAR). Proximity between two clusters is the mean square in their joint cluster: $MS_{12} = SS_{12}/(n_1+n_2)$ . (Between two singleton objects this quantity = squared euclidean distance / $4$ .). Still other methods represent some specialized set distances . HAC algorithm can be based on them, only not on the generic Lance-Williams formula; such distances include, among other: Hausdorff distance and Point-centroid cross-distance (I've implemented a HAC program for SPSS based on those.) First 5 methods described permit any proximity measures (any similarities or distances) and results will, naturally, depend on the measure chosen. Next 6 methods described require distances; and fully correct will be to use only squared euclidean distances with them, because these methods compute centroids in euclidean space. 
Therefore distances should be euclidean for the sake of geometric correctness (these 6 methods are called together geometric linkage methods). At worst case, you might input other metric distances at admitting more heuristic, less rigorous analysis. Now about that "squared". Computation of centroids and deviations from them are most convenient mathematically/programmically to perform on squared distances, that's why HAC packages usually require to input and are tuned to process the squared ones. However, there exist implementations - fully equivalent yet a bit slower - based on nonsquared distances input and requiring those; see for example "Ward-2" implementation for Ward's method. You should consult with the documentation of you clustering program to know which - squared or not - distances it expects at input to a "geometric method" in order to do it right. Methods MNDIS, MNSSQ, and MNVAR require on steps, in addition to just update the Lance-Williams formula, to store a within-cluster statistic (which depends on the method). Methods which are most frequently used in studies where clusters are expected to be solid more or less round clouds, - are methods of average linkage, complete linkage method, and Ward's method. Ward's method is the closest, by it properties and efficiency, to K-means clustering; they share the same objective function - minimization of the pooled within-cluster SS "in the end". Of course, K-means (being iterative and if provided with decent initial centroids) is usually a better minimizer of it than Ward. However, Ward seems to me a bit more accurate than K-means in uncovering clusters of uneven physical sizes (variances) or clusters thrown about space very irregularly. MIVAR method is weird to me, I can't imagine when it could be recommended, it doesn't produce dense enough clusters. Methods centroid, median, minimal increase of variance – may give sometimes the so-called reversals : a phenomenon when the two clusters being merged at some step appear closer to each other than pairs of clusters merged earlier. That is because these methods do not belong to the so called ultrametric. This situation is inconvenient but is theoretically OK. Methods of single linkage and centroid belong to so called space contracting , or β€œchaining”. That means - roughly speaking - that they tend to attach objects one by one to clusters, and so they demonstrate relatively smooth growth of curve β€œ% of clustered objects”. On the contrary, methods of complete linkage, Ward’s, sum-of-squares, increase of variance, and variance commonly get considerable share of objects clustered even on early steps, and then proceed merging yet those – therefore their curve β€œ% of clustered objects” is steep from the first steps. These methods are called space dilating . Other methods fall in-between. Flexible versions . By adding the additional parameter into the Lance-Willians formula it is possible to make a method become specifically self-tuning on its steps. The parameter brings in correction for the being computed between-cluster proximity, which depends on the size (amount of de-compactness) of the clusters. The meaning of the parameter is that it makes the method of agglomeration more space dilating or space contracting than the standard method is doomed to be. Most well-known implementation of the flexibility so far is to average linkage methods UPGMA and WPGMA (Belbin, L. et al. 
A Comparison of Two Approaches to Beta-Flexible Clustering // Multivariate Behavioral Research, 1992, 27, 417–433.). Dendrogram. On a dendrogram "Y" axis, typically displayed is the proximity between the merging clusters - as was defined by methods above. Therefore, for example, in centroid method the squared distance is typically gauged (ultimately, it depends on the package and it options) - some researchers are not aware of that. Also, by tradition, with methods based on increment of nondensity, such as Ward’s, usually shown on the dendrogram is cumulative value - it is sooner for convenience reasons than theoretical ones. Thus, (in many packages) the plotted coefficient in Ward’s method represents the overall, across all clusters, within-cluster sum-of-squares observed at the moment of a given step. Don't miss to read the documentation of your package to find out in which form the particular program displays colligation coefficient (cluster distance) on its dendrogram. One should refrain from judging which linkage method is "better" for his data by comparing the looks of the dendrograms: not only because the looks change when you change what modification of the coefficient you plot there - as it was just described, - but because the look will differ even on the data with no clusters. To choose the "right" method There is no single criterion. Some guidelines how to go about selecting a method of cluster analysis (including a linkage method in HAC as a particular case) are outlined in this answer and the whole thread therein.
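As a small, hedged addendum (not part of the original answer): SciPy's hierarchical clustering exposes most of the linkage methods discussed above under its own names ('average' for UPGMA, 'weighted' for WPGMA, 'centroid' for UPGMC, 'median' for WPGMC, 'ward' for Ward's MISSQ), and, echoing the point about documentation, you should check its docs for exactly which quantity each method reports as the dendrogram height. A minimal sketch comparing the methods on a toy two-cloud dataset:

```python
# Illustrative sketch only; the data and the choice of 2 clusters are assumptions.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 2)),    # one roundish cloud
               rng.normal(5, 1, (30, 2))])   # a second cloud

for method in ["single", "complete", "average", "weighted",
               "centroid", "median", "ward"]:
    # 'centroid', 'median' and 'ward' are the "geometric" methods and are only
    # correct with euclidean distances, as discussed above.
    Z = linkage(X, method=method, metric="euclidean")
    labels = fcluster(Z, t=2, criterion="maxclust")   # cut the tree into 2 clusters
    print(method, np.bincount(labels)[1:])            # cluster sizes per method
```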
{ "source": [ "https://stats.stackexchange.com/questions/195446", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/104058/" ] }
195,456
One of the biggest issues with cluster analysis is that we may end up drawing different conclusions depending on which clustering method we use (including different linkage methods in hierarchical clustering). I would like to know your opinion on this - which method would you select, and how? One might say "the best clustering method is the one which gives you the right answer"; but I would question in response that cluster analysis is supposed to be an unsupervised technique - so how do I know which method or linkage gives the right answer? In general: is a clustering alone robust enough to rely on? Or do we need a second method and rely on a result shared by both? My question is not only about possible ways to validate / evaluate clustering performance, but is broader - on what basis do we select/prefer one clustering method/algorithm over another. Also, are there common warnings we should watch out for when selecting a method to cluster our data? I know that this is a very general question and very difficult to answer. I would only like to know if you have any comment, advice or suggestion for me to learn more about this.
Often they say that there is no other analytical technique as strongly of the "as you sow you shall mow" kind, as cluster analysis is. I can imagine of a number dimensions or aspects of "rightness" of this or that clustering method : Cluster metaphor . "I preferred this method because it constitutes clusters such (or such a way) which meets with my concept of a cluster in my particular project" . Each clustering algorithm or subalgorithm/method implies its corresponding structure/build/shape of a cluster. In regard to hierarchical methods, I've observed this in one of points here , and also here . I.e. some methods give clusters that are prototypically "types", other give "circles [by interest]", still other "[political] platforms", "classes", "chains", etc. Select that method which cluster metaphor suits you. For example, if I see my customer segments as types - more or less spherical shapes with compaction(s) in the middle I'll choose Ward's linkage method or K-means, but never single linkage method, clearly. If I need a focal representative point I could use medoid method. If I need to screen points for them being core and peripheral representatives I could use DBSCAN approach. Data/method assumptions . "I preferred this method because my data nature or format predispose to it" . This important and vast point is also mentioned in my link above. Different algorithms/methods may require different kind of data for them or different proximity measure to be applied to the data, and vice versa, different data may require different methods. There are methods for quantitative and methods for qualitative data. Mixture quantitative + qualitative features dramatically narrows the scope of choice among methods. Ward's or K-means are based - explicitly or implicitly - on (squared) euclidean distance proximity measure only and not on arbitrary measure. Binary data may call for special similarity measures which in turn will strongly question using some methods, for example Ward's or K-means, for them. Big data may need special algorithms or special implementations. Internal validity . "I preferred this method because it gave me most clear-cut, tight-and-isolated clusters" . Choose algorithm/method that shows the best results for your data from this point of view. The more tight, dense are clusters inside and the less density is outside of them (or the wider apart are the clusters) - the greater is the internal validity. Select and use appropriate internal clustering criteria ( which are plenty - Calinski-Harabasz, Silhouette, etc etc; sometimes also called "stopping rules") to assess it. [Beware of overfitting: all clustering methods seek to maximize some version of internal validity $^1$ (it's what clustering is about), so high validity may be partly due to random peculiarity of the given dataset; having a test dataset is always beneficial.] External validity . "I preferred this method because it gave me clusters which differ by their background or clusters which match with the true ones I know" . If a clustering partition presents clusters which are clearly different on some important background (i.e. not participated in the cluster analysis) characteristics then it is an asset for that method which produced the partition. Use any analysis which applies to check the difference; there also exist a number of useful external clustering criteria (Rand, F-measure, etc etc). 
Another variant of external validation case is when you somehow know the true clusters in your data (know "ground truth"), such as when you generated the clusters yourself. Then how accurately your clustering method is able to uncover the real clusters is the measure of external validity. Cross-validity . "I preferred this method because it is giving me very similar clusters on equivalent samples of the data or extrapolates well onto such samples" . There are various approaches and their hybrids, some more feasible with some clustering methods while others with other methods. Two main approaches are stability check and generalizability check. Checking stability of a clustering method, one randomly splits or resamples the data in partly intersecting or fully disjoint sets and does the clustering on each; then matches and compares the solutions wrt some emergent cluster characteristic (for example, a cluster's central tendency location) whether it is stable across the sets. Checking generalizability implies doing clustering on a train set and then using its emergent cluster characteristic or rule to assign objects of a test set, plus also doing clustering on the test set. The assignment result's and the clustering result's cluster memberships of the test set objects are compared then. Interpretation . "I preferred this method because it gave me clusters which, explained, are most persuasive that there is meaning in the world" . It's not statistical - it is your psychological validation. How meaningful are the results for you, the domain and, possibly audience/client. Choose method giving most interpretable, spicy results. Gregariousness . Some researches regularly and all researches occasionally would say "I preferred this method because it gave with my data similar results with a number of other methods among all those I probed" . This is a heuristic but questionable strategy which assumes that there exist quite universal data or quite universal method. Points 1 and 2 are theoretical and precede obtaining the result; exclusive relying on these points is the haughty, self-assured exploratory strategy. Points 3, 4 and 5 are empirical and follow the result; exclusive relying on these points is the fidgety, try-all-out exploratory strategy. Point 6 is creative which means that it denies any result in order to try to rejustify it. Point 7 is loyal mauvaise foi. Points 3 through 7 can also be judges in your selection of the "best" number of clusters . $^1$ A concrete internal clustering criterion is itself not "orthogonal to" a clustering method (nor to the data kind). This raises a philosophical question to what extent such a biased or prejudiced criterion can be of utility (see answers just noticing it).
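To make points 3 (internal validity) and 5 (cross-validity / generalizability) a bit more concrete, here is a rough scikit-learn sketch; K-means, the silhouette criterion and the adjusted Rand index are just example choices, not recommendations, and the data are simulated.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score, adjusted_rand_score

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# (3) internal validity: compare candidate solutions with an internal criterion
for k in (2, 3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))

# (5) generalizability check: cluster one half, assign the other half using the
# first half's centroids, and compare with that half's own clustering
rng = np.random.default_rng(0)
idx = rng.permutation(len(X))
a, b = idx[:150], idx[150:]
km_a = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X[a])
km_b = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X[b])
print(round(adjusted_rand_score(km_b.labels_, km_a.predict(X[b])), 3))
```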
{ "source": [ "https://stats.stackexchange.com/questions/195456", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/69143/" ] }
196,576
Suppose we have a random variable $X \sim f(x|\theta)$ . If $\theta_0$ were the true parameter, the likelihood function should be maximized and its derivative equal to zero. This is the basic principle behind the maximum likelihood estimator. As I understand it, Fisher information is defined as $$I(\theta) = \Bbb E \Bigg[\left(\frac{\partial}{\partial \theta}f(X|\theta)\right)^2\Bigg ]$$ Thus, if $\theta_0$ is the true parameter, $I(\theta) = 0$ . But if $\theta_0$ is not the true parameter, then we will have a larger amount of Fisher information. My questions: Does Fisher information measure the "error" of a given MLE? In other words, doesn't the existence of positive Fisher information imply my MLE can't be ideal? How does this definition of "information" differ from the one used by Shannon? Why do we call it information?
Trying to complement the other answers... What kind of information is Fisher information? Start with the loglikelihood function $$ \ell (\theta) = \log f(x;\theta) $$ as a function of $\theta$ for $\theta \in \Theta$ , the parameter space. Assuming some regularity conditions we do not discuss here, we have $\DeclareMathOperator{\E}{\mathbb{E}} \E \frac{\partial}{\partial \theta} \ell (\theta) = \E_\theta \dot{\ell}(\theta) = 0$ (we will write derivatives with respect to the parameter as dots as here). The variance is the Fisher information $$ I(\theta) = \E_\theta ( \dot{\ell}(\theta) )^2= -\E_\theta \ddot{\ell}(\theta) $$ the last formula showing that it is the (negative) curvature of the loglikelihood function. One often finds the maximum likelihood estimator (mle) of $\theta$ by solving the likelihood equation $\dot{\ell}(\theta)=0$ when the Fisher information as the variance of the score $\dot{\ell}(\theta)$ is large, then the solution to that equation will be very sensitive to the data, giving a hope for high precision of the mle. That is confirmed at least asymptotically, the asymptotic variance of the mle being the inverse of Fisher information. How can we interpret this? $\ell(\theta)$ is the likelihood information about the parameter $\theta$ from the sample. This can really only be interpreted in a relative sense, like when we use it to compare the plausibilities of two distinct possible parameter values via the likelihood ratio test $\ell(\theta_0) - \ell(\theta_1)$ . The rate of change of the loglikelihood is the score function $\dot{\ell}(\theta)$ tells us how fast the likelihood changes, and its variance $I(\theta)$ how much this varies from sample to sample, at a given parameter value, say $\theta_0$ . The equation (which is really surprising!) $$ I(\theta) = - \E_\theta \ddot{\ell}(\theta) $$ tells us there is a relationship (equality) between the variability in the information (likelihood) for a given parameter value, $\theta_0$ , and the curvature of the likelihood function for that parameter value. This is a surprising relationship between the variability (variance) of ths statistic $\dot{\ell}(\theta) \mid_{\theta=\theta_0}$ and the expected change in likelihood when we vary the parameter $\theta$ in some interval around $\theta_0$ (for the same data). This is really both strange, surprising and powerful! So what is the likelihood function? We usually think of the statistical model $\{ f(x;\theta), \theta \in \Theta \} $ as a family of probability distributions for data $x$ , indexed by the parameter $\theta$ some element in the parameter space $\Theta$ . We think of this model as being true if there exists some value $\theta_0 \in \Theta$ such that the data $x$ actually have the probability distribution $f(x;\theta_0)$ . So we get a statistical model by imbedding the true data-generating probability distribution $f(x;\theta_0)$ in a family of probability distributions. But, it is clear that such an imbedding can be done in many different ways, and each such imbedding will be a "true" model, and they will give different likelihood functions. And, without such an imbedding, there is no likelihood function. It seems that we really do need some help, some principles for how to choose an imbedding wisely! So, what does this mean? It means that the choice of likelihood function tells us how we would expect the data to change, if the truth changed a little bit. 
But, this cannot really be verified by the data, as the data only gives information about the true model function $f(x;\theta_0)$ which actually generated the data, and nothing about all the other elements in the chosen model. This way we see that the choice of the likelihood function is similar to the choice of a prior in Bayesian analysis: it injects non-data information into the analysis. Let us look at this in a simple (somewhat artificial) example, and look at the effect of imbedding $f(x;\theta_0)$ in a model in different ways. Let us assume that $X_1, \dotsc, X_n$ are iid as $N(\mu=10, \sigma^2=1)$ . So, that is the true, data-generating distribution. Now, let us embed this in a model in two different ways, model A and model B. $$ A \colon X_1, \dotsc, X_n ~\text{iid}~N(\mu, \sigma^2=1),\mu \in \mathbb{R} \\ B \colon X_1, \dotsc, X_n ~\text{iid}~N(\mu, \mu/10), \mu>0 $$ you can check that these coincide for $\mu=10$ . The loglikelihood functions become $$ \ell_A(\mu) = -\frac{n}{2} \log (2\pi) -\frac12\sum_i (x_i-\mu)^2 \\ \ell_B(\mu) = -\frac{n}{2} \log (2\pi) - \frac{n}{2}\log(\mu/10) - \frac{10}{2}\sum_i \frac{(x_i-\mu)^2}{\mu} $$ The score functions (loglikelihood derivatives): $$ \dot{\ell}_A(\mu) = n (\bar{x}-\mu) \\ \dot{\ell}_B(\mu) = -\frac{n}{2\mu} + \frac{10}{2}\sum_i \left(\frac{x_i}{\mu}\right)^2 - 5 n $$ and the curvatures $$ \ddot{\ell}_A(\mu) = -n \\ \ddot{\ell}_B(\mu) = \frac{n}{2\mu^2} - \frac{10}{2}\sum_i \frac{2 x_i^2}{\mu^3} $$ so, the Fisher information really does depend on the imbedding. Now, we calculate the Fisher information at the true value $\mu=10$ , using $\E_\mu X_i^2 = \mu^2 + \mu/10$ : $$ I_A(\mu=10) = n, \\ I_B(\mu=10) = -\E\ddot{\ell}_B(10) = n \cdot \left(1+\frac{1}{200}\right) > n $$ so the Fisher information about the parameter is somewhat larger in model B. This illustrates that, in some sense, the Fisher information tells us how fast the information from the data about the parameter would have changed if the governing parameter changed in the way postulated by the imbedding in a model family . The explanation of the higher information in model B is that our model family B postulates that if the expectation increased, then the variance would increase too. So, under model B, the sample variance also carries information about $\mu$ , which it does not under model A. Also, this example illustrates that we really do need some theory to help us construct model families.
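A small numerical check of the identity above (my own sketch, not part of the original answer): for model A the score at the true value is $\sum_i (x_i-\mu_0)$, whose variance over repeated samples should match $I_A(\mu_0)=n$, which is also minus the (constant) curvature.

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu0, reps = 25, 10.0, 20000

scores = np.empty(reps)
for r in range(reps):
    x = rng.normal(mu0, 1.0, n)
    scores[r] = np.sum(x - mu0)      # score of model A at the true parameter

print("Var(score) ~", round(scores.var(), 2))   # close to n = 25
print("-E[curvature] =", n)                     # the curvature is exactly -n in model A
```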
{ "source": [ "https://stats.stackexchange.com/questions/196576", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/68473/" ] }
196,601
In classical statistics, there is a definition that a statistic $T$ of a set of data $y_1, \ldots, y_n$ is complete for a parameter $\theta$ if it is impossible to form an unbiased estimator of $0$ from it nontrivially. That is, the only way to have $E h(T (y )) = 0$ for all $\theta$ is to have $h$ be $0$ almost surely. Is there an intuition behind this? It seems like a rather mechanical way of defining things. I am aware this has been asked before, but I was wondering if there is a very easy to understand intuition that would help introductory students digest the material more easily.
I will try to add to the other answer. First, completeness is a technical condition which is justified mainly by the theorems that use it. So let us start with some related concepts and theorems where they occur. Let $X=(X_1,X_2,\dotsc,X_n)$ represent a vector of iid data, which we model as having a distribution $f(x;\theta), \theta \in \Theta$ where the parameter $\theta$ governing the data is unknown. $T=T(X)$ is sufficient if the conditional distribution of $X \mid T$ does not depend on the parameter $\theta$ . $V=V(X)$ is ancillary if the distribution of $V$ does not depend on $\theta$ (within the family $f(x;\theta)$ ). $U=U(X)$ is an unbiased estimator of zero if its expectation is zero, irrespective of $\theta$ . $S=S(X)$ is a complete statistic if any unbiased estimator of zero based on $S$ is identically zero, that is, if $\DeclareMathOperator{\E}{\mathbb{E}} \E g(S)=0 (\text{for all $\theta$})$ then $g(S)=0$ a.e. (for all $\theta$ ). Now, suppose you have two different unbiased estimators of $\theta$ based on the sufficient statistic $T$ , $g_1(T), g_2(T)$ . That is, in symbols $$ \E g_1(T)=\theta ,\\ \E g_2(T)=\theta $$ and $\DeclareMathOperator{\P}{\mathbb{P}} \P(g_1(T) \not= g_2(T) ) > 0$ (for all $\theta$ ). Then $g_1(T)-g_2(T)$ is an unbiased estimator of zero, which is not identically zero, proving that $T$ is not complete. So, completeness of an sufficient statistic $T$ gives us that there exists only one unique unbiased estimator of $\theta$ based on $T$ . That is already very close to the Lehmann–ScheffΓ© theorem. Let us look at some examples. Suppose $X_1, \dotsc, X_n$ now are iid uniform on the interval $(\theta, \theta+1)$ . We can show that ( $X_{(1)} < X_{(2)} < \dotsm < X_{(n)}$ are the order statistics) the pair $(X_{(1)}, X_{(n)})$ is sufficient, but it is not complete, because the difference $X_{(n)}-X_{(1)}$ is ancillary; we can compute its expectation, let it be $c$ (which is a function of $n$ only), and then $X_{(n)}-X_{(1)} -c$ will be an unbiased estimator of zero which is not identically zero. So our sufficient statistic, in this case, is not complete and sufficient. And we can see what that means: there exist functions of the sufficient statistic which are not informative about $\theta$ (in the context of the model). This cannot happen with a complete sufficient statistic; it is in a sense maximally informative, in that no functions of it are uninformative. On the other hand, if there is some function of the minimally sufficient statistic that has expectation zero, that could be seen as a noise term ; disturbance/noise terms in models have expectation zero. So we could say that non-complete sufficient statistics do contain some noise . Look again at the range $R=X_{(n)}-X_{(1)}$ in this example. Since its distribution does not depend on $\theta$ , it doesn't by itself alone contain any information about $\theta$ . But, together with the sufficient statistic, it does! How? Look at the case where $R=1$ is observed.Then, in the context of our (known to be true) model, we have perfect knowledge of $\theta$ ! Namely, we can say with certainty that $\theta = X_{(1)}$ . You can check that any other value for $\theta$ then leads to either $X_{(1)}$ or $X_{(n)}$ being an impossible observation, under the assumed model. On the other hand, if we observe $R=0.1$ , then the range of possible values for $\theta$ is rather large (exercise ...). 
In this sense, the ancillary statistic $R$ does contain some information about the precision with which we can estimate $\theta$ based on this data and model. In this example, and others, the ancillary statistic $R$ "takes over the role of the sample size". Usually, confidence intervals and such need the sample size $n$ , but in this example, we can make a conditional confidence interval this is computed using only $R$ , not $n$ (exercise.) This was an idea of Fisher, that inference should be conditional on some ancillary statistic. Now, Basu's theorem: If $T$ is complete sufficient, then it is independent of any ancillary statistic. That is, inference based on a complete sufficient statistic is simpler, in that we do not need to consider conditional inference. Conditioning on a statistic which is independent of $T$ does not change anything, of course. Then, a last example to give some more intuition. Change our uniform distribution example to a uniform distribution on the interval $(\theta_1, \theta_2)$ (with $\theta_1<\theta_2$ ). In this case the statistic $(X_{(1)}, X_{(n)})$ is complete and sufficient. What changed? We can see that completeness is really a property of the model . In the former case, we had a restricted parameter space. This restriction destroyed completeness by introducing relationships on the order statistics. By removing this restriction we got completeness! So, in a sense, lack of completeness means that the parameter space is not big enough, and by enlarging it we can hope to restore completeness (and thus, easier inference). Some other examples where lack of completeness is caused by restrictions on the parameter space, see my answer to: What kind of information is Fisher information? Let $X_1, \dotsc, X_n$ be iid $\mathcal{Cauchy}(\theta,\sigma)$ (a location-scale model). Then the order statistics are sufficient but not complete. But now enlarge this model to a fully nonparametric model, still iid but from some completely unspecified distribution $F$ . Then the order statistics are sufficient and complete. For exponential families with canonical parameter space (that is, as large as possible) the minimal sufficient statistic is also complete. But in many cases, introducing restrictions on the parameter space, as with curved exponential families , destroys completeness. A very relevant paper is Lehmann (1981), J. Am. Stat. Assoc. , 76 , 374, "An Interpretation of Completeness and Basu's Theorem".
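A quick simulation sketch of the uniform example (mine, with assumed values of $n$ and $\theta$): the range $R$ has the same distribution whatever $\theta$ is, yet conditioning on $R$ tells us how tightly $(X_{(1)}, X_{(n)})$ pins $\theta$ down, since $\theta$ must lie in $(X_{(n)}-1,\; X_{(1)})$, an interval of width $1-R$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 100_000

def simulate(theta):
    x = rng.uniform(theta, theta + 1, (reps, n))
    return x.min(axis=1), x.max(axis=1)

lo0, hi0 = simulate(theta=0.0)
lo7, hi7 = simulate(theta=7.0)
r0, r7 = hi0 - lo0, hi7 - lo7
print(round(r0.mean(), 3), round(r7.mean(), 3))   # same distribution: R is ancillary

# the interval of theta values compatible with the data has width 1 - R
for cut in (0.2, 0.6, 0.95):
    print(cut, round((1 - r0[r0 > cut]).mean(), 3))  # large R => theta nearly known
```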
{ "source": [ "https://stats.stackexchange.com/questions/196601", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/53410/" ] }
197,509
I have some data which I would like to smooth so that the smoothed points are monotonically decreasing. My data sharply decreases and then begins to plateau. Here's an example using R df <- data.frame(x=1:10, y=c(100,41,22,10,6,7,2,1,3,1)) ggplot(df, aes(x=x, y=y))+geom_line() What's a good smoothing technique I could use? Also, it'd be nice if I can force the 1st smoothed point to be close to my observed point.
You can do this using penalised splines with monotonicity constraints via the mono.con() and pcls() functions in the mgcv package. There's a little fiddling about to do because these functions aren't as user friendly as gam() , but the steps are shown below, based mostly on the example from ?pcls , modified to suit the sample data you gave: df <- data.frame(x=1:10, y=c(100,41,22,10,6,7,2,1,3,1)) ## Set up the size of the basis functions/number of knots k <- 5 ## This fits the unconstrained model but gets us smoothness parameters that ## that we will need later unc <- gam(y ~ s(x, k = k, bs = "cr"), data = df) ## This creates the cubic spline basis functions of `x` ## It returns an object containing the penalty matrix for the spline ## among other things; see ?smooth.construct for description of each ## element in the returned object sm <- smoothCon(s(x, k = k, bs = "cr"), df, knots = NULL)[[1]] ## This gets the constraint matrix and constraint vector that imposes ## linear constraints to enforce montonicity on a cubic regression spline ## the key thing you need to change is `up`. ## `up = TRUE` == increasing function ## `up = FALSE` == decreasing function (as per your example) ## `xp` is a vector of knot locations that we get back from smoothCon F <- mono.con(sm$xp, up = FALSE) # get constraints: up = FALSE == Decreasing constraint! Now we need to fill in the object that gets passed to pcls() containing details of the penalised constrained model we want to fit ## Fill in G, the object pcsl needs to fit; this is just what `pcls` says it needs: ## X is the model matrix (of the basis functions) ## C is the identifiability constraints - no constraints needed here ## for the single smooth ## sp are the smoothness parameters from the unconstrained GAM ## p/xp are the knot locations again, but negated for a decreasing function ## y is the response data ## w are weights and this is fancy code for a vector of 1s of length(y) G <- list(X = sm$X, C = matrix(0,0,0), sp = unc$sp, p = -sm$xp, # note the - here! This is for decreasing fits! y = df$y, w = df$y*0+1) G$Ain <- F$A # the monotonicity constraint matrix G$bin <- F$b # the monotonicity constraint vector, both from mono.con G$S <- sm$S # the penalty matrix for the cubic spline G$off <- 0 # location of offsets in the penalty matrix Now we can finally do the fitting ## Do the constrained fit p <- pcls(G) # fit spline (using s.p. from unconstrained fit) p contains a vector of coefficients for the basis functions corresponding to the spline. To visualize the fitted spline, we can predict from the model at 100 locations over the range of x. We do 100 values so as to get a nice smooth line on the plot. ## predict at 100 locations over range of x - get a smooth line on the plot newx <- with(df, data.frame(x = seq(min(x), max(x), length = 100))) To generate predicted values we use Predict.matrix() , which generates a matrix such that when multiple by coefficients p yields predicted values from the fitted model: fv <- Predict.matrix(sm, newx) %*% p newx <- transform(newx, yhat = fv[,1]) plot(y ~ x, data = df, pch = 16) lines(yhat ~ x, data = newx, col = "red") This produces: I'll leave it up to you to get the data into a tidy form for plotting with ggplot ... You can force a closer fit (to partially answer your question about having the smoother fit the first data point) by increasing the dimension of the basis function of x . 
For example, setting k equal to 8 ( k <- 8 ) and rerunning the code above gives a closer fit to the first data point. You can't push k much higher for these data, and you have to be careful about overfitting; all pcls() is doing is solving the penalised least squares problem given the constraints and the supplied basis functions - it's not performing smoothness selection for you (not that I know of...). If you want interpolation, then see the base R function ?splinefun , which has Hermite splines and cubic splines with monotonicity constraints. In this case you can't use that, however, as the data are not strictly monotonic.
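Not part of the answer above, but for readers working in Python: scikit-learn's isotonic regression gives a monotone decreasing fit to the same toy data with far less setup, at the cost of being piecewise constant rather than a smooth penalised spline like the pcls() fit.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

x = np.arange(1, 11)
y = np.array([100, 41, 22, 10, 6, 7, 2, 1, 3, 1])

iso = IsotonicRegression(increasing=False)   # enforce a non-increasing fit
yhat = iso.fit_transform(x, y)
print(np.round(yhat, 2))                     # monotone, but a step function
```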
{ "source": [ "https://stats.stackexchange.com/questions/197509", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/31542/" ] }
197,517
I am using sklearn.neighbors.KNeighborsRegressor() to train over a set of very high dimensional input (721 dimensions) and continous output (a regression problem). My sample size is 2500. I heard of the curse of dimensionality so I tried implementing PCA to reduce the amount of features. The image below shows the $R^2$ score vs # neighbors for different number of features taken. By the way, these $R^2$ scores are the cross-validation scores reported by GridSearchCV() and not an out-of-sample score, but they should be similar I guess. The little triangle marks indicate the optimal scores. Now, I would have expected that lowering the amount of features would get me better scores, as it would help with the curse of dimensionality, but this is not the case. In fact, the more components you take, the better $R^2$ score I get. To be completely honest, the GridSearchCV() reported that the best parameters were n_components = 13, n_neighbors = 7 , but as you can see in the picture, it might have been just a numerical luck. According to the plot, the performances I get are pretty much the same after 7 features. I would have expected the curves to be higher until some point, and then be lower due to the curse of dimensionality. Why is it not the case? EDIT: Added a simpler plot of the cross-validation score vs PCA components. As mentioned, after some 10 components, the best scores are pretty much constant. EDIT2: Added scree plot.
{ "source": [ "https://stats.stackexchange.com/questions/197517", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/105780/" ] }
197,827
I'm having some difficulty understanding how to interpret variable importance output from the Random Forest package. Mean decrease in accuracy is usually described as "the decrease in model accuracy from permuting the values in each feature". Is this a statement about the feature as a whole or about specific values within the feature? In either case, is the Mean Decrease in Accuracy the number or proportion of observations that are incorrectly classified by removing the feature (or values from the feature) in question from the model? Say we have the following model: require(randomForest) data(iris) set.seed(1) dat <- iris dat $Species <- factor(ifelse(dat$ Species=='virginica','virginica','other')) model.rf <- randomForest(Species~., dat, ntree=25, importance=TRUE, nodesize=5) model.rf varImpPlot(model.rf) Call: randomForest(formula = Species ~ ., data = dat, ntree = 25, proximity = TRUE, importance = TRUE, nodesize = 5) Type of random forest: classification Number of trees: 25 No. of variables tried at each split: 2 OOB estimate of error rate: 3.33% Confusion matrix: other virginica class.error other 97 3 0.03 virginica 2 48 0.04 In this model, the OOB rate is rather low (around 5%). Yet, the Mean Decrease in Accuracy for the predictor (Petal.Width) with the highest value in this measure is only around 8. Does this mean that removing Petal.Width from the model would only result in an additional misclassification of approximately 8 observations on average? How could the Mean Decrease in Accuracy for Petal.Length be so low, given that it's the highest in this measure, and thus the other variables have even lower values on this measure?
" Is this a statement about the feature as a whole or about specific values within the feature? " "Global" variable importance is the mean decrease of accuracy over all out-of-bag cross validated predictions, when a given variable is permuted after training, but before prediction. "Global" is implicit. Local variable importance is the mean decrease of accuracy by each individual out-of-bag cross validated prediction. Global variable importance is the most popular, as it is a single number per variable, easier to understand, and more robust as it is averaged over all predictions. " In either case, is the Mean Decrease in Accuracy the number or proportion of observations that are incorrectly classified by removing the feature (or values from the feature) in question from the model? " train forest measure out-of-bag CV accuracy β†’ OOB_acc_base permute variable i measure out-of-bag CV accuracy β†’ OOB_acc_perm_i VI_i = - (OOB_acc_perm_i - OOB_acc_base) -" Does this mean that removing Petal.Length from the model would only result in an additional misclassification of 8 or so observations on average? " Yep. Both Petal.length and Petal.width alone has almost perfect linear separation. Thus the variables share redundant information and permuting only one does not obstruct the model. " How could the Mean Decrease in Accuracy for Petal.Length be so low, given that it's the highest in this measure, and thus the other variables have even lower values on this measure? " When a robust/regularized model is trained on redundant variables, it is quite resistant to permutations in single variables. Mainly use variable importance mainly to rank the usefulness of your variables. A clear interpretation of the absolute values of variable importance is hard to do well. GINI: GINI importance measures the average gain of purity by splits of a given variable. If the variable is useful, it tends to split mixed labeled nodes into pure single class nodes. Splitting by a permuted variables tend neither to increase nor decrease node purities. Permuting a useful variable, tend to give relatively large decrease in mean gini-gain. GINI importance is closely related to the local decision function, that random forest uses to select the best available split. Therefore, it does not take much extra time to compute. On the other hand, mean gini-gain in local splits, is not necessarily what is most useful to measure, in contrary to change of overall model performance. Gini importance is overall inferior to (permutation based) variable importance as it is relatively more biased, more unstable and tend to answer a more indirect question.
{ "source": [ "https://stats.stackexchange.com/questions/197827", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/85950/" ] }
198,038
I read this page: http://neuralnetworksanddeeplearning.com/chap3.html and it said that a sigmoid output layer with cross-entropy is quite similar to a softmax output layer with log-likelihood. What happens if I use sigmoid with log-likelihood, or softmax with cross-entropy, in the output layer? Is that fine? Because I see there's only a small difference between the equation for cross-entropy (eq.57): $$C = -\frac{1}{n} \sum\limits_x (y \ln a + (1-y) \ln (1-a))$$ and for log-likelihood (eq.80): $$C =-\frac{1}{n} \sum\limits_x(\ln a^L_y)$$
The negative log likelihood (eq.80) is also known as the multiclass cross-entropy (ref: Pattern Recognition and Machine Learning, Section 4.3.4), as they are in fact two different interpretations of the same formula. eq.57 is the negative log likelihood of the Bernoulli distribution, whereas eq.80 is the negative log likelihood of the multinomial distribution with one observation (a multiclass version of the Bernoulli). For binary classification problems, the softmax function outputs two values (between 0 and 1, summing to 1) to give the prediction for each class, whereas the sigmoid function outputs one value (between 0 and 1) to give the prediction for one class (so the other class has probability 1-p). So eq.80 can't be directly applied to the sigmoid output, though it is essentially the same loss as eq.57. Also see this answer . Following is a simple illustration of the connection between (sigmoid + binary cross-entropy) and (softmax + multiclass cross-entropy) for binary classification problems. Say we take $0.5$ as the split point of the two categories; for the sigmoid output it follows that $$\sigma(wx+b)=0.5$$ $$wx+b=0$$ which is the decision boundary in the feature space. For the softmax output it follows that $$\frac{e^{w_1x+b_1}}{e^{w_1x+b_1}+e^{w_2x+b_2}}=0.5$$ $$e^{w_1x+b_1}=e^{w_2x+b_2}$$ $$w_1x+b_1=w_2x+b_2$$ $$(w_1-w_2)x+(b_1-b_2)=0$$ so it remains the same model although there are twice as many parameters. The following figures show the decision boundaries obtained using these two methods, which are almost identical.
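A quick numerical check of the equivalence (my own sketch): with two-class softmax logits (0, z), the softmax probability of class 1 equals the sigmoid of z, so eq.80 reproduces eq.57 exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=10)            # sigmoid logits
y = rng.integers(0, 2, size=10)    # binary labels

p = 1 / (1 + np.exp(-z))
bce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))            # eq. 57

logits = np.stack([np.zeros_like(z), z], axis=1)                   # class-0 logit fixed at 0
soft = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
nll = -np.mean(np.log(soft[np.arange(len(y)), y]))                 # eq. 80

print(bce, nll)   # identical up to floating-point error
```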
{ "source": [ "https://stats.stackexchange.com/questions/198038", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/105981/" ] }
198,061
I've noticed lately that a lot of people are developing tensor equivalents of many methods (tensor factorization, tensor kernels, tensors for topic modeling, etc) I'm wondering, why is the world suddenly fascinated with tensors? Are there recent papers/ standard results that are particularly surprising, that brought about this? Is it computationally a lot cheaper than previously suspected? I'm not being glib, I sincerely am interested, and if there are any pointers to papers about this, I'd love to read them.
Tensors often offer more natural representations of data, e.g., consider video, which consists of obviously correlated images over time. You can turn this into a matrix, but it's just not natural or intuitive (what does a factorization of some matrix-representation of video mean?). Tensors are trending for several reasons: our understanding of multilinear algebra is improving rapidly, specifically in various types of factorizations, which in turn helps us to identify new potential applications (e.g., multiway component analysis ) software tools are emerging (e.g., Tensorlab ) and are being welcomed Big Data applications can often be solved using tensors, for example recommender systems , and Big Data itself is hot increases in computational power, as some tensor operations can be hefty (this is also one of the major reasons why deep learning is so popular now)
{ "source": [ "https://stats.stackexchange.com/questions/198061", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/71319/" ] }
198,067
I'm currently teaching myself how to do classification, and specifically I'm looking at three methods: support vector machines, neural networks, and logistic regression. What I am trying to understand is why logistic regression would ever perform better than the other two. From my understanding of logistic regression, the idea is to fit a logistic function to the entire data. So if my data is binary, all my data with label 0 should be mapped to the value 0 (or close to it), and all my data with value 1 should be mapped to value 1 (or close to it). Now, because the logistic function is continuous and smooth, performing this regression requires all my data to fit the curve; there is no greater importance applied to data points near the decision boundary, and all data points contribute to the loss by different amounts. However, with support vector machines and neural networks, only those data points near the decision boundary are important; as long as a data point remains on the same side of the decision boundary, it will contribute the same loss. Therefore, why would logistic regression ever outperform support vector machines or neural networks, given that it "wastes resources" on trying to fit a curve to lots of unimportant (easily-classifiable) data, rather than focussing only on the difficult data around the decision boundary?
The resources you consider to be "wasted" are, in fact, information gains provided by logistic regression. You started out with the wrong premise. Logistic regression is not a classifier. It is a probability/risk estimator. Unlike SVM, it allows for and expects "close calls". It will lead to optimum decision making because it does not try to trick the predictive signal into incorporating a utility function that is implicit whenever you classify observations. The goal of logistic regression using maximum likelihood estimation is to provide optimum estimates of Prob$(Y=1|X)$. The result is used in many ways, e.g. lift curves, credit risk scoring, etc. See Nate Silver's book Signal and the Noise for compelling arguments in favor of probabilistic reasoning. Note that the dependent variable $Y$ in logistic regression can be coded any way you want: 0/1, A/B, yes/no, etc. The primary assumption of logistic regression is that $Y$ is truly binary, e.g. it was not contrived from an underlying ordinal or continuous response variable. It, like classification methods, is for truly all-or-nothing phenomena. Some analysts think that logistic regression assumes linearity of predictor effects on the log odds scale. That was only true when DR Cox invented the logistic model in 1958 at a time when computing wasn't available to extend the model using tools such as regression splines. The only real weakness in logistic regression is that you need to specify which interactions you want to allow in the model. For most datasets this is turned into a strength because the additive main effects are generally much stronger predictors than interactions, and machine learning methods that give equal priority to interactions can be unstable, hard to interpret, and require larger sample sizes than logistic regression to predict well.
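A small scikit-learn illustration of the point (mine, not from the answer): the model's real output is a risk estimate, and any classification is a separate decision that depends on the utility/threshold you choose.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
clf = LogisticRegression().fit(X, y)

risks = clf.predict_proba(X[:5])[:, 1]        # estimated P(Y=1 | X): the real output
print(risks.round(3))
print((risks > 0.5).astype(int))              # one decision rule...
print((risks > 0.9).astype(int))              # ...a different utility gives different decisions
```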
{ "source": [ "https://stats.stackexchange.com/questions/198067", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/72307/" ] }
198,956
I am an econ/stat major. I am aware that economists have tried to modify their assumptions about human behavior and rationality by identifying situations in which people don't behave rationally. For example, suppose I offer you a 100% chance of a \$1000 loss or a 50% chance at a \$2500 loss, people choose the \$2500 option even though the expected value of the latter is a greater loss than a \$1000 guaranteed loss. This is known as "loss aversion". Behavioral economists now study these patterns and try to identify ways humans deviate from those axioms normally assumed to constitute "rational" behavior. Here, I assume it is rational to prefer the least expected loss. I was wondering if statisticians have identified common patterns in data collection that yield biased results in how people interpret data. If there was essentially a "rational" way to collect data, I assume there are examples where humans deviate from this and exhibit "bias". If so, what are the most common biases humans make when collecting or interpreting data?
I think in academia, p-values are very commonly misinterpreted. People tend to forget that the p-value expresses a conditional probability. Even if an experiment has been perfectly conducted and all requisites of the chosen statistical test are met, the false discovery rate is typically much higher than the significance level alpha. The false discovery rate increases with a decrease in statistical power and prevalence of true positives (Colquhoun, 2014; Nuzzo, 2014). In addition people tend to regard their estimates as the truth and the parameter they estimate as random (Haller & Kraus, 2002). For example when they say that in β€œ95% of the cases this identified confidence interval covers the parameter”... Confusion of correlation and causation is probably also a very common error in data interpretation. In terms of data collection, I think a common error is to take the most easily accessible rather than the most representative sample. Colquhoun, D. (2014). An investigation of the false discovery rate and the misinterpretation of P values. Royal Society Open Science, 1–15. Nuzzo, R. (2014). Statistical errors: P values, the β€œgold standard” of statistical validity are not as reliable as many scientists assume. Nature, 506, 150–152. Haller, H. & Kraus, S. (2002): Misinterpretations of Significance: A Problem Students Share with Their Teachers? Methods of Psychological Research Online, Vol.7, No.1
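To make the false-discovery-rate point concrete, here is a back-of-envelope calculation (the prevalence and power values are illustrative assumptions, not figures from the references):

```python
alpha = 0.05       # significance level
power = 0.80       # P(significant result | the effect is real)
prevalence = 0.10  # P(a tested hypothesis is a real effect)

false_pos = alpha * (1 - prevalence)
true_pos = power * prevalence
fdr = false_pos / (false_pos + true_pos)
print(round(fdr, 2))   # about 0.36: far above the 5% that alpha seems to promise
```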
{ "source": [ "https://stats.stackexchange.com/questions/198956", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/68473/" ] }
199,077
I know that R has gam and mgcv libraries for generalized additive models. But I am having difficulty finding their counterparts in the Python ecosystem (statsmodels only has prototype in the sandbox). Is anyone aware of existing python libraries? Who knows this might be a good project to develop/contribute to scikit-learn if not.
I've written a Python implementation of GAMs using penalized B-splines. check it out here: https://github.com/dswah/pyGAM I've included lots of link functions, distributions and features.
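A minimal usage sketch, based on my reading of the project's README at the time of writing (adjust if the API has since changed):

```python
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, (200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.2, 200)

gam = LinearGAM(s(0) + s(1)).fit(X, y)   # one spline term per feature
gam.summary()                            # per-term smoothing / effective-dof summary
```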
{ "source": [ "https://stats.stackexchange.com/questions/199077", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/106882/" ] }
199,080
Given that today is a leap day, does anyone know the probability of being born on a leap day?
Sure. See here for a more detailed explanation: http://www.public.iastate.edu/~mlamias/LeapYear.pdf . But essentially the author concludes, "There are 485 leap years in 2 millennia. So, in 2 millennia, there are $485(366) + (2000-485)(365)= 730485$ total days. Of those days, February 29 occurs in 485 of them (the leap years), so the probability is $485/730485=0.0006639424$"
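A quick check of the arithmetic using a full 400-year Gregorian cycle (the 2000-year figure above is just five such cycles):

```python
import calendar

leap = sum(calendar.isleap(y) for y in range(2000, 2400))  # 97 leap years per 400-year cycle
days = leap * 366 + (400 - leap) * 365                     # 146097 days per cycle
print(leap / days)   # ~0.0006639, the same value as 485/730485
```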
{ "source": [ "https://stats.stackexchange.com/questions/199080", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/7962/" ] }
199,280
We have a random experiment with different outcomes forming the sample space $\Omega,$ on which we look with interest at certain patterns, called events $\mathscr{F}.$ Sigma-algebras (or sigma-fields) are made up of events to which a probability measure $\mathbb{P}$ can be assigned. Certain properties are fulfilled, including the inclusion of the null set $\varnothing$ and the entire sample space, and an algebra that describes unions and intersections with Venn diagrams. Probability is defined as a function between the $\sigma$ -algebra and the interval $[0,1]$ . Altogether, the triple $(\Omega, \mathscr{F}, \mathbb{P})$ forms a probability space . Could someone explain in plain English why the probability edifice would collapse if we didn't have a $\sigma$ -algebra? They are just wedged in the middle with that impossibly calligraphic "F". I trust they are necessary; I see that an event is different from an outcome, but what would go awry without a $\sigma$ -algebras? The question is: In what type of probability problems the definition of a probability space including a $\sigma$ -algebra becomes a necessity? This online document on the Dartmouth University website provides a plain English accessible explanation. The idea is a spinning pointer rotating counterclockwise on a circle of unit perimeter: We begin by constructing a spinner, which consists of a circle of unit circumference and a pointer as shown in [the] Figure. We pick a point on the circle and label it $0$ , and then label every other point on the circle with the distance, say $x$ , from $0$ to that point, measured counterclockwise. The experiment consists of spinning the pointer and recording the label of the point at the tip of the pointer. We let the random variable $X$ denote the value of this outcome. The sample space is clearly the interval $[0,1)$ . We would like to construct a probability model in which each outcome is equally likely to occur. If we proceed as we did [...] for experiments with a finite number of possible outcomes, then we must assign the probability $0$ to each outcome, since otherwise, the sum of the probabilities, over all of the possible outcomes, would not equal 1. (In fact, summing an uncountable number of real numbers is a tricky business; in particular, in order for such a sum to have any meaning, at most countably many of the summands can be different than $0$ .) However, if all of the assigned probabilities are $0$ , then the sum is $0$ , not $1$ , as it should be. So if we assigned to each point any probability, and given that there is an (uncountably) infinity number of points, their sum would add up to $> 1$ .
To Xi'an's first point: When you're talking about $\sigma$ -algebras, you're asking about measurable sets, so unfortunately any answer must focus on measure theory. I'll try to build up to that gently, though. A theory of probability admitting all subsets of uncountable sets will break mathematics Consider this example. Suppose you have a unit square in $\mathbb{R}^2$ , and you're interested in the probability of randomly selecting a point that is a member of a specific set in the unit square. In lots of circumstances, this can be readily answered based on a comparison of areas of the different sets. For example, we can draw some circles, measure their areas, and then take the probability as the fraction of the square falling in the circle. Very simple. But what if the area of the set of interest is not well-defined? If the area is not well-defined, then we can reason to two different but completely valid (in some sense) conclusions about what the area is. So we could have $P(A)=1$ on the one hand and $P(A)=0$ on the other hand, which implies $0=1$ . This breaks all of math beyond repair. You can now prove $5<0$ and a number of other preposterous things. Clearly this isn't too useful. $\boldsymbol{\sigma}$ -algebras are the patch that fixes math What is a $\sigma$ -algebra, precisely? It's actually not that frightening. It's just a definition of which sets may be considered as events. Elements not in $\mathscr{F}$ simply have no defined probability measure. Basically, $\sigma$ -algebras are the "patch" that lets us avoid some pathological behaviors of mathematics, namely non-measurable sets. The three requirements of a $\sigma$ -field can be considered as consequences of what we would like to do with probability: A $\sigma$ -field is a set that has three properties: Closure under countable unions. Closure under countable intersections. Closure under complements. The countable unions and countable intersections components are direct consequences of the non-measurable set issue. Closure under complements is a consequence of the Kolmogorov axioms: if $P(A)=2/3$ , $P(A^c)$ ought to be $1/3$ . But without (3), it could happen that $P(A^c)$ is undefined. That would be strange. Closure under complements and the Kolmogorov axioms let us to say things like $P(A\cup A^c)=P(A)+1-P(A)=1$ . Finally, We are considering events in relation to $\Omega$ , so we further require that $\Omega\in\mathscr{F}$ Good news: $\boldsymbol{\sigma}$ -algebras are only strictly necessary for uncountable sets But! There's good news here, also. Or, at least, a way to skirt the issue. We only need $\sigma$ -algebras if we're working in a set with uncountable cardinality. If we restrict ourselves to countable sets, then we can take $\mathscr{F}=2^\Omega$ the power set of $\Omega$ and we won't have any of these problems because for countable $\Omega$ , $2^\Omega$ consists only of measurable sets. (This is alluded to in Xi'an's second comment.) You'll notice that some textbooks will actually commit a subtle sleight-of-hand here, and only consider countable sets when discussing probability spaces. Additionally, in geometric problems in $\mathbb{R}^n$ , it's perfectly sufficient to only consider $\sigma$ -algebras composed of sets for which the $\mathcal{L}^n$ measure is defined. To ground this somewhat more firmly, $\mathcal{L}^n$ for $n=1,2,3$ corresponds to the usual notions of length, area and volume. 
So what I'm saying in the previous example is that the set needs to have a well-defined area for it to have a geometric probability assigned to it. And the reason is this: if we admit non-measureable sets, then we can end up in situations where we can assign probability 1 to some event based on some proof, and probability 0 to the same event event based on some other proof. But don't let the connection to uncountable sets confuse you! A common misconception that $\sigma$ -algebras are countable sets. In fact, they may be countable or uncountable. Consider this illustration: as before, we have a unit square. Define $$\mathscr{F}=\text{All subsets of the unit square with defined $\mathcal{L}^2$ measure}.$$ You can draw a square $B$ with side length $s$ for all $s \in (0,1)$ , and with one corner at $(0,0)$ . It should be clear that this square is a subset of the unit square. Moreover, all of these squares have defined area, so these squares are elements of $\mathscr{F}$ . But it should also be clear that there are uncountably many squares $B$ : the number of such squares is uncountable, and each square has defined Lebesgue measure. So as a practical matter, simply making that observation is often enough to make the observation that you only consider Lebesgue-measurable sets to gain headway against the problem of interest. But wait, what's a non-measurable set? I'm afraid I can only shed a little bit of light on this myself. But the Banach-Tarski paradox (sometimes the "sun and pea" paradox) can help us some: Given a solid ball in 3‑dimensional space, there exists a decomposition of the ball into a finite number of disjoint subsets, which can then be put back together in a different way to yield two identical copies of the original ball. Indeed, the reassembly process involves only moving the pieces around and rotating them, without changing their shape. However, the pieces themselves are not "solids" in the usual sense, but infinite scatterings of points. The reconstruction can work with as few as five pieces. A stronger form of the theorem implies that given any two "reasonable" solid objects (such as a small ball and a huge ball), either one can be reassembled into the other. This is often stated informally as "a pea can be chopped up and reassembled into the Sun" and called the "pea and the Sun paradox". 1 So if you're working with probabilities in $\mathbb{R}^3$ and you're using the geometric probability measure (the ratio of volumes), you want to work out the probability of some event. But you'll struggle to define that probability precisely, because you can rearrange the sets of your space to change volumes! If probability depends on volume, and you can change the volume of the set to be the size of the sun or the size of a pea, then the probability will also change. So no event will have a single probability ascribed to it. Even worse, you can rearrange $S\in\Omega$ such that the volume of $S$ has $V(S)>V(\Omega)$ , which implies that the geometric probability measure reports a probability $P(S)>1$ , in flagrant violation of the Kolmogorov axioms which require that probability has measure 1. To resolve this paradox, one could make one of four concessions: The volume of a set might change when it is rotated. The volume of the union of two disjoint sets might be different from the sum of their volumes. The axioms of Zermelo–Fraenkel set theory with the axiom of Choice (ZFC) might have to be altered. 
(4) Some sets might be tagged "non-measurable", and one would need to check whether a set is "measurable" before talking about its volume. Option (1) doesn't help us define probabilities, so it's out. Option (2) violates the second Kolmogorov axiom, so it's out. Option (3) seems like a terrible idea because ZFC fixes so many more problems than it creates. But option (4) seems attractive: if we develop a theory of what is and is not measurable, then we will have well-defined probabilities in this problem! This brings us back to measure theory, and our friend the $\sigma$-algebra.
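As a minimal numerical sketch of the "compare areas" recipe for a set that is unproblematically measurable (the radius, centre, and number of samples below are arbitrary illustrative choices), one can estimate the geometric probability by Monte Carlo and compare it to the exact area $\pi r^2$:

import numpy as np

rng = np.random.default_rng(0)
r = 0.25                                             # disc of radius 0.25 centred at (0.5, 0.5)
pts = rng.uniform(0.0, 1.0, size=(1_000_000, 2))     # uniform points in the unit square
inside = (pts[:, 0] - 0.5) ** 2 + (pts[:, 1] - 0.5) ** 2 <= r ** 2
print(inside.mean())                                 # Monte Carlo estimate of P(point lands in the disc)
print(np.pi * r ** 2)                                # exact answer: the area of the disc, about 0.196

For a nice set like a disc the two numbers agree; the whole point above is that for a non-measurable set there is no such area for the estimate to converge to.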
{ "source": [ "https://stats.stackexchange.com/questions/199280", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/67822/" ] }
199,605
How does the reparameterization trick for variational autoencoders (VAE) work? Is there an intuitive and easy explanation without simplifying the underlying math? And why do we need the 'trick'?
After reading through Kingma's NIPS 2015 workshop slides , I realized that we need the reparameterization trick in order to backpropagate through a random node. Intuitively, in its original form, VAEs sample from a random node $z$ which is approximated by the parametric model $q(z \mid \phi, x)$ of the true posterior. Backprop cannot flow through a random node. Introducing a new parameter $\epsilon$ allows us to reparameterize $z$ in a way that allows backprop to flow through the deterministic nodes.
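A minimal numerical sketch of that idea in plain NumPy (the objective $f(z)=z^2$ and the parameter values are just illustrative assumptions): instead of differentiating "through" a sample $z \sim \mathcal{N}(\mu, \sigma^2)$, write $z = \mu + \sigma\epsilon$ with $\epsilon \sim \mathcal{N}(0, 1)$, so the randomness sits in $\epsilon$ and the path from $(\mu, \sigma)$ to the loss is deterministic and differentiable.

import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.8                  # toy "variational" parameters
eps = rng.standard_normal(1_000_000)  # auxiliary noise, independent of mu and sigma
z = mu + sigma * eps                  # reparameterized sample: z ~ N(mu, sigma^2)

# Toy objective f(z) = z^2, so E[f(z)] = mu^2 + sigma^2 and dE[f]/dmu = 2*mu.
# Because z is a deterministic function of (mu, sigma) given eps, the gradient
# can be pushed through the sample: d f(mu + sigma*eps)/d mu = 2*(mu + sigma*eps).
grad_mu_mc = np.mean(2.0 * z)
print(grad_mu_mc, 2.0 * mu)           # Monte Carlo estimate vs. the exact gradient 2*mu

In a VAE the same trick is what lets an automatic-differentiation framework backpropagate the decoder's loss into the encoder's $\mu$ and $\sigma$ outputs.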
{ "source": [ "https://stats.stackexchange.com/questions/199605", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/98120/" ] }
200,063
In most TensorFlow code I have seen, the Adam optimizer is used with a constant learning rate of 1e-4 (i.e. 0.0001). The code usually looks like the following:
...build the model...
# Add the optimizer
train_op = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
# Add the ops to initialize variables. These will include
# the optimizer slots added by AdamOptimizer().
init_op = tf.initialize_all_variables()
# launch the graph in a session
sess = tf.Session()
# Actually initialize the variables
sess.run(init_op)
# now train your model
for ...:
    sess.run(train_op)
I am wondering whether it is useful to use exponential decay when using the Adam optimizer, i.e. use the following code:
...build the model...
# Add the optimizer
step = tf.Variable(0, trainable=False)
rate = tf.train.exponential_decay(0.15, step, 1, 0.9999)
optimizer = tf.train.AdamOptimizer(rate).minimize(cross_entropy, global_step=step)
# Add the ops to initialize variables. These will include
# the optimizer slots added by AdamOptimizer().
init_op = tf.initialize_all_variables()
# launch the graph in a session
sess = tf.Session()
# Actually initialize the variables
sess.run(init_op)
# now train your model
for ...:
    sess.run(train_op)
Usually, people use some kind of learning rate decay, but for Adam it seems uncommon. Is there any theoretical reason for this? Can it be useful to combine the Adam optimizer with decay?
Empirically speaking: definitely try it out, you may find some very useful training heuristics, in which case, please do share! Usually people use some kind of decay, for Adam it seems uncommon. Is there any theoretical reason for this? Can it be useful to combine Adam optimizer with decay? I haven't seen enough people's code using ADAM optimizer to say if this is true or not. If it is true, perhaps it's because ADAM is relatively new and learning rate decay "best practices" haven't been established yet. I do want to note however that learning rate decay is actually part of the theoretical guarantee for ADAM. Specifically in Theorem 4.1 of their ICLR article , one of their hypotheses is that the learning rate has a square root decay, $\alpha_t = \alpha/\sqrt{t}$. Furthermore, for their logistic regression experiments they use the square root decay as well. Simply put: I don't think anything in the theory discourages using learning rate decay rules with ADAM. I have seen people report some good results using ADAM and finding some good training heuristics would be incredibly valuable.
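For what it's worth, here is a hedged sketch (using the same TF 1.x-style API as the question; the base rate is an arbitrary choice, and this is only one way to wire it up, not a recommendation) of pairing AdamOptimizer with the square-root decay $\alpha_t = \alpha/\sqrt{t}$ that appears in the theorem:

# assumes the same setup as in the question, with cross_entropy already defined
step = tf.Variable(0, trainable=False)
base_rate = 1e-3                                               # arbitrary base learning rate
rate = base_rate / tf.sqrt(tf.cast(step, tf.float32) + 1.0)    # alpha_t = alpha / sqrt(t)
train_op = tf.train.AdamOptimizer(rate).minimize(cross_entropy, global_step=step)

Passing global_step makes the counter advance on every update, so the learning rate shrinks like $1/\sqrt{t}$ as training proceeds.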
{ "source": [ "https://stats.stackexchange.com/questions/200063", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/107535/" ] }
200,500
We already have multiple threads tagged as p-values that reveal lots of misunderstandings about them. Ten months ago we had a thread about psychological journal that "banned" $p$-values , now American Statistical Association (2016) says that with our analysis we "should not end with the calculation of a $p$-value". American Statistical Association (ASA) believes that the scientific community could benefit from a formal statement clarifying several widely agreed upon principles underlying the proper use and interpretation of the $p$-value. The committee lists other approaches as possible alternatives or supplements to $p$-values: In view of the prevalent misuses of and misconceptions concerning $p$-values, some statisticians prefer to supplement or even replace $p$-values with other approaches. These include methods that emphasize estimation over testing, such as confidence, credibility, or prediction intervals; Bayesian methods; alternative measures of evidence, such as likelihood ratios or Bayes Factors; and other approaches such as decision-theoretic modeling and false discovery rates. All these measures and approaches rely on further assumptions, but they may more directly address the size of an effect (and its associated uncertainty) or whether the hypothesis is correct. So let's imagine post-$p$-values reality. ASA lists some methods that can be used in place of $p$-values, but why are they better? Which of them can be real-life replacement for a researcher that used $p$-values for all his life? I imagine that this kind of questions will appear in post-$p$-values reality, so maybe let's try to be one step ahead of them. What is the reasonable alternative that can be applied out-of-the-box? Why this approach should convince your lead researcher, editor, or readers? As this follow-up blog entry suggests, $p$-values are unbeatable in their simplicity: The p-value requires only a statistical model for the behavior of a statistic under the null hypothesis to hold. Even if a model of an alternative hypothesis is used for choosing a β€œgood” statistic (which would be used for constructing the p-value), this alternative model does not have to be correct in order for the p-value to be valid and useful (i.e.: control type I error at the desired level while offering some power to detect a real effect). In contrast, other (wonderful and useful) statistical methods such as Likelihood ratios, effect size estimation, confidence intervals, or Bayesian methods all need the assumed models to hold over a wider range of situations, not merely under the tested null. Are they, or maybe it is not true and we can easily replace them? I know, this is broad, but the main question is simple: what is the best (and why), real-life alternative to $p$-values that can be used as a replacement? ASA (2016). ASA Statement on Statistical Significance and $P$-values. The American Statistician. (in press)
I will focus this answer on the specific question of what are the alternatives to $p$ -values. There are 21 discussion papers published along with the ASA statement (as Supplemental Materials): by Naomi Altman, Douglas Altman, Daniel J. Benjamin, Yoav Benjamini, Jim Berger, Don Berry, John Carlin, George Cobb, Andrew Gelman, Steve Goodman, Sander Greenland, John Ioannidis, Joseph Horowitz, Valen Johnson, Michael Lavine, Michael Lew, Rod Little, Deborah Mayo, Michele Millar, Charles Poole, Ken Rothman, Stephen Senn, Dalene Stangl, Philip Stark and Steve Ziliak (some of them wrote together; I list all for future searches). These people probably cover all existing opinions about $p$ -values and statistical inference. I have looked through all 21 papers. Unfortunately, most of them do not discuss any real alternatives, even though the majority are about the limitations, misunderstandings, and various other problems with $p$ -values (for a defense of $p$ -values, see Benjamini, Mayo, and Senn). This already suggests that alternatives, if any, are not easy to find and/or to defend. So let us look at the list of "other approaches" given in the ASA statement itself (as quoted in your question): [Other approaches] include methods that emphasize estimation over testing, such as confidence, credibility, or prediction intervals; Bayesian methods; alternative measures of evidence, such as likelihood ratios or Bayes Factors; and other approaches such as decision-theoretic modeling and false discovery rates. Confidence intervals Confidence intervals are a frequentist tool that goes hand-in-hand with $p$ -values; reporting a confidence interval (or some equivalent, e.g., mean $\pm$ standard error of the mean) together with the $p$ -value is almost always a good idea. Some people (not among the ASA disputants) suggest that confidence intervals should replace the $p$ -values. One of the most outspoken proponents of this approach is Geoff Cumming who calls it new statistics (a name that I find appalling). See e.g. this blog post by Ulrich Schimmack for a detailed critique: A Critical Review of Cumming’s (2014) New Statistics: Reselling Old Statistics as New Statistics . See also We cannot afford to study effect size in the lab blog post by Uri Simonsohn for a related point. See also this thread (and my answer therein) about the similiar suggestion by Norm Matloff where I argue that when reporting CIs one would still like to have the $p$ -values reported as well: What is a good, convincing example in which p-values are useful? Some other people (not among the ASA disputants either), however, argue that confidence intervals, being a frequentist tool, are as misguided as $p$ -values and should also be disposed of. See, e.g., Morey et al. 2015, The Fallacy of Placing Confidence in Confidence Intervals linked by @Tim here in the comments. This is a very old debate. Bayesian methods (I don't like how the ASA statement formulates the list. Credible intervals and Bayes factors are listed separately from "Bayesian methods", but they are obviously Bayesian tools. So I count them together here.) There is a huge and very opinionated literature on the Bayesian vs. frequentist debate. See, e.g., this recent thread for some thoughts: When (if ever) is a frequentist approach substantively better than a Bayesian? 
Bayesian analysis makes total sense if one has good informative priors, and everybody would be only happy to compute and report $p(\theta|\text{data})$ or $p(H_0:\theta=0|\text{data})$ instead of $p(\text{data at least as extreme}|H_0)$ β€”but alas, people usually do not have good priors. An experimenter records 20 rats doing something in one condition and 20 rats doing the same thing in another condition; the prediction is that the performance of the former rats will exceed the performance of the latter rats, but nobody would be willing or indeed able to state a clear prior over the performance differences. (But see @FrankHarrell's answer where he advocates using "skeptical priors".) Die-hard Bayesians suggest to use Bayesian methods even if one does not have any informative priors. One recent example is Krushke, 2012, Bayesian estimation supersedes the $t$ -test , humbly abbreviated as BEST. The idea is to use a Bayesian model with weak uninformative priors to compute the posterior for the effect of interest (such as, e.g., a group difference). The practical difference with frequentist reasoning seems usually to be minor, and as far as I can see this approach remains unpopular. See What is an "uninformative prior"? Can we ever have one with truly no information? for the discussion of what is "uninformative" (answer: there is no such thing, hence the controversy). An alternative approach, going back to Harold Jeffreys, is based on Bayesian testing (as opposed to Bayesian estimation ) and uses Bayes factors. One of the more eloquent and prolific proponents is Eric-Jan Wagenmakers, who has published a lot on this topic in recent years. Two features of this approach are worth emphasizing here. First, see Wetzels et al., 2012, A Default Bayesian Hypothesis Test for ANOVA Designs for an illustration of just how strongly the outcome of such a Bayesian test can depend on the specific choice of the alternative hypothesis $H_1$ and the parameter distribution ("prior") it posits. Second, once a "reasonable" prior is chosen (Wagenmakers advertises Jeffreys' so called "default" priors), resulting Bayes factors often turn out to be quite consistent with the standard $p$ -values, see e.g. this figure from this preprint by Marsman & Wagenmakers : So while Wagenmakers et al. keep insisting that $p$ -values are deeply flawed and Bayes factors are the way to go, one cannot but wonder... (To be fair, the point of Wetzels et al. 2011 is that for $p$ -values close to $0.05$ Bayes factors only indicate very weak evidence against the null; but note that this can be easily dealt with in a frequentist paradigm simply by using a more stringent $\alpha$ , something that a lot of people are advocating anyway.) One of the more popular papers by Wagenmakers et al. in the defense of Bayes factors is 2011, Why psychologists must change the way they analyze their data: The case of psi where he argues that infamous Bem's paper on predicting the future would not have reached their faulty conclusions if only they had used Bayes factors instead of $p$ -values. See this thoughtful blog post by Ulrich Schimmack for a detailed (and IMHO convincing) counter-argument: Why Psychologists Should Not Change The Way They Analyze Their Data: The Devil is in the Default Prior . See also The Default Bayesian Test is Prejudiced Against Small Effects blog post by Uri Simonsohn. 
For completeness, I mention that Wagenmakers 2007, A practical solution to the pervasive problems of $p$-values suggested using BIC as an approximation to the Bayes factor to replace the $p$-values. BIC does not depend on the prior and hence, despite its name, is not really Bayesian; I am not sure what to think about this proposal. It seems that more recently Wagenmakers is more in favour of Bayesian tests with uninformative Jeffreys' priors, see above. For further discussion of Bayes estimation vs. Bayesian testing, see Bayesian parameter estimation or Bayesian hypothesis testing? and links therein. Minimum Bayes factors. Among the ASA disputants, this is explicitly suggested by Benjamin & Berger and by Valen Johnson (the only two papers that are all about suggesting a concrete alternative). Their specific suggestions are a bit different but they are similar in spirit. The ideas of Berger go back to Berger & Sellke 1987, and there are a number of papers by Berger, Sellke, and collaborators up until last year elaborating on this work. The idea is that under a spike and slab prior where the point null $\mu=0$ hypothesis gets probability $0.5$ and all other values of $\mu$ get probability $0.5$ spread symmetrically around $0$ ("local alternative"), the minimal posterior $p(H_0)$ over all local alternatives, i.e. the minimal Bayes factor, is much higher than the $p$-value. This is the basis of the (much contested) claim that $p$-values "overstate the evidence" against the null. The suggestion is to use a lower bound on the Bayes factor in favour of the null instead of the $p$-value; under some broad assumptions this lower bound turns out to be given by $-ep\log(p)$, i.e., the $p$-value is effectively multiplied by $-e\log(p)$ which is a factor of around $10$ to $20$ for the common range of $p$-values. This approach has been endorsed by Steven Goodman too. Later update: See a nice cartoon explaining these ideas in a simple way. Even later update: See Held & Ott, 2018, On $p$-Values and Bayes Factors for a comprehensive review and further analysis of converting $p$-values to minimum Bayes factors. Here is one table from there: Valen Johnson suggested something similar in his PNAS 2013 paper; his suggestion approximately boils down to multiplying $p$-values by $\sqrt{-4\pi\log(p)}$ which is around $5$ to $10$. For a brief critique of Johnson's paper, see Andrew Gelman's and @Xi'an's reply in PNAS. For the counter-argument to Berger & Sellke 1987, see Casella & Berger 1987 (different Berger!). Among the ASA discussion papers, Stephen Senn argues explicitly against any of these approaches: Error probabilities are not posterior probabilities. Certainly, there is much more to statistical analysis than $P$-values but they should be left alone rather than being deformed in some way to become second class Bayesian posterior probabilities. See also references in Senn's paper, including the ones to Mayo's blog. The ASA statement lists "decision-theoretic modeling and false discovery rates" as another alternative. I have no idea what they are talking about, and I was happy to see this stated in the discussion paper by Stark: The "other approaches" section ignores the fact that the assumptions of some of those methods are identical to those of $p$-values. Indeed, some of the methods use $p$-values as input (e.g., the False Discovery Rate).
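Taking the two calibrations quoted above at face value, here is a quick numerical sketch of what they imply for common $p$-values (it does nothing more than evaluate the formulas as stated in this answer):

import numpy as np

for p in [0.05, 0.01, 0.005, 0.001]:
    min_bf = -np.e * p * np.log(p)                     # Berger/Sellke-style lower bound on the Bayes factor
    berger_factor = -np.e * np.log(p)                  # the "multiply p by this" factor, roughly 10-20
    johnson_factor = np.sqrt(-4 * np.pi * np.log(p))   # Johnson's approximate factor, roughly 5-10
    print(p, round(min_bf, 3), round(berger_factor, 1), round(johnson_factor, 1))

For $p=0.05$ the bound $-ep\log(p)$ is about $0.41$, which is the kind of number driving the "overstates the evidence" argument.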
I am highly skeptical that there is anything that can replace $p$ -values in actual scientific practice such that the problems that are often associated with $p$ -values (replication crisis, $p$ -hacking, etc.) would go away. Any fixed decision procedure, e.g. a Bayesian one, can probably be "hacked" in the same way as $p$ -values can be $p$ -hacked (for some discussion and demonstration of this see this 2014 blog post by Uri Simonsohn ). To quote from Andrew Gelman's discussion paper: In summary, I agree with most of the ASA’s statement on $p$ -values but I feel that the problems are deeper, and that the solution is not to reform $p$ -values or to replace them with some other statistical summary or threshold, but rather to move toward a greater acceptance of uncertainty and embracing of variation. And from Stephen Senn: In short, the problem is less with $P$ -values per se but with making an idol of them. Substituting another false god will not help. And here is how Cohen put it into his well-known and highly-cited (3.5k citations) 1994 paper The Earth is round ( $p<0.05$ ) where he argued very strongly against $p$ -values: [...] don't look for a magic alternative to NHST, some other objective mechanical ritual to replace it. It doesn't exist.
{ "source": [ "https://stats.stackexchange.com/questions/200500", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/35989/" ] }
200,745
The phrase p -hacking (also: "data dredging" , "snooping" or "fishing") refers to various kinds of statistical malpractice in which results become artificially statistically significant. There are many ways to procure a "more significant" result, including but by no means limited to: only analysing an "interesting" subset of the data , in which a pattern was found; failing to adjust properly for multiple testing , particularly post-hoc testing and failing to report tests carried out that were not significant; trying different tests of the same hypothesis , e.g. both a parametric and a non-parametric test ( there's some discussion of that in this thread ), but only reporting the most significant; experimenting with inclusion/exclusion of data points , until the desired result is obtained. One opportunity comes when "data-cleaning outliers", but also when applying an ambiguous definition (e.g. in an econometric study of "developed countries", different definitions yield different sets of countries), or qualitative inclusion criteria (e.g. in a meta-analysis, it may be a finely balanced argument whether a particular study's methodology is sufficient robust to include); the previous example is related to optional stopping , i.e., analyzing a dataset and deciding on whether to collect more data or not depending on the data collected so far ("this is almost significant, let's measure three more students!") without accounting for this in the analysis; experimentation during model-fitting , particularly covariates to include, but also regarding data transformations/functional form. So we know p -hacking can be done. It is often listed as one of the "dangers of the p -value" and was mentioned in the ASA report on statistical significance, discussed here on Cross Validated , so we also know it's a Bad Thing. Although some dubious motivations and (particularly in the competition for academic publication) counterproductive incentives are obvious, I suspect it's hard to figure out quite why it's done, whether deliberate malpractice or simple ignorance. Someone reporting p -values from a stepwise regression (because they find stepwise procedures "produce good models", but aren't aware the purported p -values are invalidated) is in the latter camp, but the effect is still p -hacking under the last of my bullet points above. There's certainly evidence that p -hacking is "out there", e.g. Head et al (2015) looks for tell-tale signs of it infecting the scientific literature, but what is the current state of our evidence base about it? I'm aware that the approach taken by Head et al was not without controversy, so the current state of the literature, or general thinking in the academic community, would be interesting. For instance do we have any idea about: Just how prevalent is it, and to what extent can we differentiate its occurrence from publication bias ? (Is this distinction even meaningful?) Is the effect particularly acute at the $p \approx 0.05$ boundary? Are similar effects seen at $p \approx 0.01$, for instance, or do we see whole ranges of p -values affected? Do patterns in p -hacking vary between academic fields? Do we have any idea which of the mechanisms of p -hacking (some of which are listed in the bullet points above) are most common? Have some forms proven harder to detect than others because they are "better disguised"? References Head, M. L., Holman, L., Lanfear, R., Kahn, A. T., & Jennions, M. D. (2015). The extent and consequences of p -hacking in science . PLoS Biol , 13 (3), e1002106.
EXECUTIVE SUMMARY: if "p-hacking" is to be understood broadly a la Gelman's forking paths, the answer to how prevalent it is, is that it is almost universal. Andrew Gelman likes to write about this topic and has been posting extensively about it lately on his blog. I don't always agree with him but I like his perspective on $p$ -hacking. Here is an excerpt from the Introduction to his Garden of Forking Paths paper (Gelman & Loken 2013; a version appeared in American Scientist 2014; see also Gelman's brief comment on the ASA's statement), emphasis mine: This problem is sometimes called β€œp-hacking” or β€œresearcher degrees of freedom” (Simmons, Nelson, and Simonsohn, 2011). In a recent article, we spoke of β€œfishing expeditions [...]”. But we are starting to feel that the term β€œfishing” was unfortunate, in that it invokes an image of a researcher trying out comparison after comparison, throwing the line into the lake repeatedly until a fish is snagged. We have no reason to think that researchers regularly do that. We think the real story is that researchers can perform a reasonable analysis given their assumptions and their data, but had the data turned out differently, they could have done other analyses that were just as reasonable in those circumstances. We regret the spread of the terms β€œfishing” and β€œp-hacking” (and even β€œresearcher degrees of freedom”) for two reasons: first, because when such terms are used to describe a study, there is the misleading implication that researchers were consciously trying out many different analyses on a single data set; and, second, because it can lead researchers who know they did not try out many different analyses to mistakenly think they are not so strongly subject to problems of researcher degrees of freedom. [...] Our key point here is that it is possible to have multiple potential comparisons, in the sense of a data analysis whose details are highly contingent on data, without the researcher performing any conscious procedure of fishing or examining multiple p-values. So: Gelman does not like the term p-hacking because it implies that the researches were actively cheating. Whereas the problems can occur simply because the researchers choose what test to perform/report after looking at the data, i.e. after doing some exploratory analysis. With some experience of working in biology, I can safely say that everybody does that. Everybody (myself included) collects some data with only vague a priori hypotheses, does extensive exploratory analysis, runs various significance tests, collects some more data, runs and re-runs the tests, and finally reports some $p$ -values in the final manuscript. All of this is happening without actively cheating, doing dumb xkcd-jelly-beans-style cherry-picking, or consciously hacking anything. So if "p-hacking" is to be understood broadly a la Gelman's forking paths, the answer to how prevalent it is, is that it is almost universal. The only exceptions that come to mind are fully pre-registered replication studies in psychology or fully pre-registered medical trials. Specific evidence Amusingly, some people polled researchers to find that many admit doing some sort of hacking ( John et al. 2012, Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling ): Apart from that, everybody heard about the so called "replication crisis" in psychology: more than one half of the recent studies published in the top psychology journals do not replicate ( Nosek et al. 
2015, Estimating the reproducibility of psychological science ). (This study has recently been all over the blogs again, because the March 2016 issue of Science published a Comment attempting to refute Nosek et al. and also a reply by Nosek et al. The discussion continued elsewhere, see post by Andrew Gelman and the RetractionWatch post that he links to. To put it politely, the critique is unconvincing.) Update Nov 2018: Kaplan and Irvin, 2017, Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time show that the fraction of clinical trials reporting null results increased from 43% to 92% after pre-registration became required: $P$ -value distributions in the literature Head et al. 2015 I have not heard about Head et al. study before, but have now spent some time looking through the surrounding literature. I have also taken a brief look at their raw data . Head et al. downloaded all Open Access papers from PubMed and extracted all p-values reported in the text, getting 2.7 mln p-values. Out of these, 1.1 mln was reported as $p=a$ and not as $p<a$ . Out of these, Head et al. randomly took one p-value per paper but this does not seem to change the distribution, so here is how the distribution of all 1.1 mln values looks like (between $0$ and $0.06$ ): I used $0.0001$ bin width, and one can clearly see a lot of predictable rounding in the reported $p$ -values. Now, Head et al. do the following: they compare the number of $p$ -values in the $(0.045, 0.5)$ interval and in the $(0.04, 0.045)$ interval; the former number turns out to be (significantly) larger and they take it as an evidence of $p$ -hacking. If one squints, one can see it on my figure. I find this hugely unconvincing for one simple reason. Who wants to report their findings with $p=0.05$ ? Actually, many people seem to be doing exactly that, but still it appears natural to try to avoid this unsatisfactory border-line value and rather to report another significant digit, e.g. $p=0.048$ (unless of course it's $p=0.052$ ). So some excess of $p$ -values close but not equal to $0.05$ can be explained by researcher's rounding preferences. And apart from that, the effect is tiny . (The only strong effect that I can see on this figure is a pronounced drop of the $p$ -value density right after $0.05$ . This is clearly due to the publication bias.) Unless I missed something, Head et al. do not even discuss this potential alternative explanation. They do not present any histogram of the $p$ -values either. There is a bunch of papers criticizing Head et al. In this unpublished manuscript Hartgerink argues that Head et al. should have included $p=0.04$ and $p=0.05$ in their comparison (and if they had, they would not have found their effect). I am not sure about that; it does not sound very convincing. It would be much better if we could somehow inspect the distribution of the "raw" $p$ -values without any rounding. Distributions of $p$ -values without rounding In this 2016 PeerJ paper (preprint posted in 2015) the same Hartgerink et al. extract p-values from lots of papers in top psychology journals and do exactly that: they recompute exact $p$ -values from the reported $t$ -, $F$ -, $\chi^2$ - etc. statistic values; this distribution is free from any rounding artifacts and does not exhibit any increase towards 0.05 whatsoever (Figure 4): $\hspace{5em}$ A very similar approach is taken by Krawczyk 2015 in PLoS One, who extracts 135k $p$ -values from the top experimental psychology journals. 
Here is how the distribution looks for the reported (left) and recomputed (right) $p$-values: The difference is striking. The left histogram shows some weird stuff going on around $p=0.05$, but on the right one it is gone. This means that this weird stuff is due to people's preferences of reporting values around $p\approx 0.05$ and not due to $p$-hacking. Masicampo and Lalande It seems that the first to observe the alleged excess of $p$-values just below 0.05 were Masicampo & Lalande 2012, looking at three top journals in psychology: This does look impressive, but Lakens 2015 (preprint) in a published Comment argues that this only appears impressive thanks to the misleading exponential fit. See also Lakens 2015, On the challenges of drawing conclusions from p-values just below 0.05 and references therein. Economics Brodeur et al. 2016 (the link goes to the 2013 preprint) do the same thing for the economics literature. They look at the three economics journals, extract 50k test results, convert all of them into $z$-scores (using reported coefficients and standard errors whenever possible and using $p$-values if only they were reported), and get the following: This is a bit confusing because small $p$-values are on the right and large $p$-values are on the left. As the authors write in the abstract, "The distribution of p-values exhibits a camel shape with abundant p-values above .25" and "a valley between .25 and .10". They argue that this valley is a sign of something fishy, but this is only indirect evidence. Also, it might be simply due to selective reporting, when large p-values above .25 are reported as some evidence of a lack of effect but p-values between .1 and .25 are felt to be neither here nor there and tend to be omitted. (I am not sure if this effect is present in the biological literature or not because the plots above focus on the $p<0.05$ interval.) Falsely reassuring? Based on all of the above, my conclusion is that I don't see any strong evidence of $p$-hacking in $p$-value distributions across biological/psychological literature as a whole. There is plenty of evidence of selective reporting, publication bias, rounding $p$-values down to $0.05$ and other funny rounding effects, but I disagree with the conclusions of Head et al.: there is no suspicious bump below $0.05$. Uri Simonsohn argues that this is "falsely reassuring". Well, actually he cites these papers un-critically but then remarks that "most p-values are way smaller" than 0.05. Then he says: "That’s reassuring, but falsely reassuring". And here is why: If we want to know if researchers p-hack their results, we need to examine the p-values associated with their results, those they may want to p-hack in the first place. Samples, to be unbiased, must only include observations from the population of interest. Most p-values reported in most papers are irrelevant for the strategic behavior of interest. Covariates, manipulation checks, main effects in studies testing interactions, etc. Including them we underestimate p-hacking and we overestimate the evidential value of data. Analyzing all p-values asks a different question, a less sensible one. Instead of “Do researchers p-hack what they study?” we ask “Do researchers p-hack everything?” This makes total sense. Looking at all reported $p$-values is way too noisy. Uri's $p$-curve paper (Simonsohn et al. 2013) nicely demonstrates what one can see if one looks at carefully selected $p$-values.
They selected 20 psychology papers based on some suspicious keywords (namely, authors of these papers reported tests controlling for a covariate and did not report what happens without controlling for it) and then took only $p$ -values that are testing the main findings. Here is how the distribution looks like (left): Strong left skew suggests strong $p$ -hacking. Conclusions I would say that we know that there must be a lot of $p$ -hacking going on, mostly of the Forking-Paths type that Gelman describes; probably to the extent that published $p$ -values cannot really be taken at face value and should be "discounted" by the reader by some substantial fraction. However, this attitude seems to produce much more subtle effects than simply a bump in the overall $p$ -values distribution just below $0.05$ and cannot really be detected by such a blunt analysis.
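As a small simulation sketch of the selective-reporting mechanism discussed above (a toy model with arbitrary choices of 20 candidate tests, $n=30$ per group, and a true effect of exactly zero everywhere): if only the smallest of many $p$-values gets written up, the reported values pile up at the low end even though nothing is going on.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_tests, n = 5_000, 20, 30
reported = []
for _ in range(n_studies):
    # 20 candidate outcomes per "study", all with a true effect of exactly zero
    pvals = [stats.ttest_ind(rng.standard_normal(n), rng.standard_normal(n)).pvalue
             for _ in range(n_tests)]
    reported.append(min(pvals))            # only the "best" result gets written up
reported = np.array(reported)
print((reported < 0.05).mean())            # roughly 1 - 0.95**20, i.e. about 0.64, despite no real effects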
{ "source": [ "https://stats.stackexchange.com/questions/200745", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/22228/" ] }
200,766
I am confused about the hypothesis space of those two classifiers. In the case of linear regression, it's pretty straightforward ; the possible hypothesis are equations of lines, that is, linear combinations of the features. Therefore, the hypothesis space is the space of all possible coefficient values. But for kNN and NB, I don't clearly see what is the space of parameters you can act upon, and therefore I cannot define any hypothesis space. In the case of NB, I have seen it can correspond to a linear separator as well, therefore the hypothesis space would also be all the linear combinations of odds ratios? But in the case of kNN, it is totally unclear to me.
EXECUTIVE SUMMARY: if "p-hacking" is to be understood broadly a la Gelman's forking paths, the answer to how prevalent it is, is that it is almost universal. Andrew Gelman likes to write about this topic and has been posting extensively about it lately on his blog. I don't always agree with him but I like his perspective on $p$ -hacking. Here is an excerpt from the Introduction to his Garden of Forking Paths paper (Gelman & Loken 2013; a version appeared in American Scientist 2014; see also Gelman's brief comment on the ASA's statement), emphasis mine: This problem is sometimes called β€œp-hacking” or β€œresearcher degrees of freedom” (Simmons, Nelson, and Simonsohn, 2011). In a recent article, we spoke of β€œfishing expeditions [...]”. But we are starting to feel that the term β€œfishing” was unfortunate, in that it invokes an image of a researcher trying out comparison after comparison, throwing the line into the lake repeatedly until a fish is snagged. We have no reason to think that researchers regularly do that. We think the real story is that researchers can perform a reasonable analysis given their assumptions and their data, but had the data turned out differently, they could have done other analyses that were just as reasonable in those circumstances. We regret the spread of the terms β€œfishing” and β€œp-hacking” (and even β€œresearcher degrees of freedom”) for two reasons: first, because when such terms are used to describe a study, there is the misleading implication that researchers were consciously trying out many different analyses on a single data set; and, second, because it can lead researchers who know they did not try out many different analyses to mistakenly think they are not so strongly subject to problems of researcher degrees of freedom. [...] Our key point here is that it is possible to have multiple potential comparisons, in the sense of a data analysis whose details are highly contingent on data, without the researcher performing any conscious procedure of fishing or examining multiple p-values. So: Gelman does not like the term p-hacking because it implies that the researches were actively cheating. Whereas the problems can occur simply because the researchers choose what test to perform/report after looking at the data, i.e. after doing some exploratory analysis. With some experience of working in biology, I can safely say that everybody does that. Everybody (myself included) collects some data with only vague a priori hypotheses, does extensive exploratory analysis, runs various significance tests, collects some more data, runs and re-runs the tests, and finally reports some $p$ -values in the final manuscript. All of this is happening without actively cheating, doing dumb xkcd-jelly-beans-style cherry-picking, or consciously hacking anything. So if "p-hacking" is to be understood broadly a la Gelman's forking paths, the answer to how prevalent it is, is that it is almost universal. The only exceptions that come to mind are fully pre-registered replication studies in psychology or fully pre-registered medical trials. Specific evidence Amusingly, some people polled researchers to find that many admit doing some sort of hacking ( John et al. 2012, Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling ): Apart from that, everybody heard about the so called "replication crisis" in psychology: more than one half of the recent studies published in the top psychology journals do not replicate ( Nosek et al. 
2015, Estimating the reproducibility of psychological science ). (This study has recently been all over the blogs again, because the March 2016 issue of Science published a Comment attempting to refute Nosek et al. and also a reply by Nosek et al. The discussion continued elsewhere, see post by Andrew Gelman and the RetractionWatch post that he links to. To put it politely, the critique is unconvincing.) Update Nov 2018: Kaplan and Irvin, 2017, Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time show that the fraction of clinical trials reporting null results increased from 43% to 92% after pre-registration became required: $P$ -value distributions in the literature Head et al. 2015 I have not heard about Head et al. study before, but have now spent some time looking through the surrounding literature. I have also taken a brief look at their raw data . Head et al. downloaded all Open Access papers from PubMed and extracted all p-values reported in the text, getting 2.7 mln p-values. Out of these, 1.1 mln was reported as $p=a$ and not as $p<a$ . Out of these, Head et al. randomly took one p-value per paper but this does not seem to change the distribution, so here is how the distribution of all 1.1 mln values looks like (between $0$ and $0.06$ ): I used $0.0001$ bin width, and one can clearly see a lot of predictable rounding in the reported $p$ -values. Now, Head et al. do the following: they compare the number of $p$ -values in the $(0.045, 0.5)$ interval and in the $(0.04, 0.045)$ interval; the former number turns out to be (significantly) larger and they take it as an evidence of $p$ -hacking. If one squints, one can see it on my figure. I find this hugely unconvincing for one simple reason. Who wants to report their findings with $p=0.05$ ? Actually, many people seem to be doing exactly that, but still it appears natural to try to avoid this unsatisfactory border-line value and rather to report another significant digit, e.g. $p=0.048$ (unless of course it's $p=0.052$ ). So some excess of $p$ -values close but not equal to $0.05$ can be explained by researcher's rounding preferences. And apart from that, the effect is tiny . (The only strong effect that I can see on this figure is a pronounced drop of the $p$ -value density right after $0.05$ . This is clearly due to the publication bias.) Unless I missed something, Head et al. do not even discuss this potential alternative explanation. They do not present any histogram of the $p$ -values either. There is a bunch of papers criticizing Head et al. In this unpublished manuscript Hartgerink argues that Head et al. should have included $p=0.04$ and $p=0.05$ in their comparison (and if they had, they would not have found their effect). I am not sure about that; it does not sound very convincing. It would be much better if we could somehow inspect the distribution of the "raw" $p$ -values without any rounding. Distributions of $p$ -values without rounding In this 2016 PeerJ paper (preprint posted in 2015) the same Hartgerink et al. extract p-values from lots of papers in top psychology journals and do exactly that: they recompute exact $p$ -values from the reported $t$ -, $F$ -, $\chi^2$ - etc. statistic values; this distribution is free from any rounding artifacts and does not exhibit any increase towards 0.05 whatsoever (Figure 4): $\hspace{5em}$ A very similar approach is taken by Krawczyk 2015 in PLoS One, who extracts 135k $p$ -values from the top experimental psychology journals. 
Here is how the distribution looks for the reported (left) and recomputed (right) $p$ -values: The difference is striking. The left histogram shows some weird stuff going on around $p=0.05$ , but on the right one it is gone. This means that this weird stuff is due to people's preferences of reporting values around $p\approx 0.05$ and not due to $p$ -hacking. Mascicampo and Lalande It seems that the first to observe the alleged excess of $p$ -values just below 0.05 were Masicampo & Lalande 2012 , looking at three top journals in psychology: This does look impressive, but Lakens 2015 ( preprint ) in a published Comment argues that this only appears impressive thanks to the misleading exponential fit. See also Lakens 2015, On the challenges of drawing conclusions from p-values just below 0.05 and references therein. Economics Brodeur et al. 2016 (the link goes to the 2013 preprint) do the same thing for economics literature. The look at the three economics journals, extract 50k test results, convert all of them into $z$ -scores (using reported coefficients and standard errors whenever possible and using $p$ -values if only they were reported), and get the following: This is a bit confusing because small $p$ -values are on the right and large $p$ -values are on the left. As authors write in the abstract, "The distribution of p-values exhibits a camel shape with abundant p-values above .25" and "a valley between .25 and .10". They argue that this valley is a sign of something fishy, but this is only an indirect evidence. Also, it might be simply due to selective reporting, when large p-values above .25 are reported as some evidence of a lack of effect but p-values between .1 and .25 are felt to be neither here nor there and tend to be omitted. (I am not sure if this effect is present in biological literature or not because the plots above focus on $p<0.05$ interval.) Falsely reassuring? Based on all of the above, my conclusion is that I don't see any strong evidence of $p$ -hacking in $p$ -value distributions across biological/psychological literature as a whole. There is plenty of evidence of selective reporting, publication bias, rounding $p$ -values down to $0.05$ and other funny rounding effects, but I disagree with conclusions of Head et al.: there is no suspicious bump below $0.05$ . Uri Simonsohn argues that this is "falsely reassuring" . Well, actually he cites these papers un-critically but then remarks that "most p-values are way smaller" than 0.05. Then he says: "That’s reassuring, but falsely reassuring". And here is why: If we want to know if researchers p-hack their results, we need to examine the p-values associated with their results, those they may want to p-hack in the first place. Samples, to be unbiased, must only include observations from the population of interest. Most p-values reported in most papers are irrelevant for the strategic behavior of interest. Covariates, manipulation checks, main effects in studies testing interactions, etc. Including them we underestimate p-hacking and we overestimate the evidential value of data. Analyzing all p-values asks a different question, a less sensible one. Instead of β€œDo researchers p-hack what they study?” we ask β€œDo researchers p-hack everything?” This makes total sense. Looking at all reported $p$ -values is way too noisy. Uri's $p$ -curve paper ( Simonsohn et al. 2013 ) nicely demonstrates what one can see if one looks at carefully selected $p$ -values. 
They selected 20 psychology papers based on some suspicious keywords (namely, authors of these papers reported tests controlling for a covariate and did not report what happens without controlling for it) and then took only $p$ -values that are testing the main findings. Here is how the distribution looks like (left): Strong left skew suggests strong $p$ -hacking. Conclusions I would say that we know that there must be a lot of $p$ -hacking going on, mostly of the Forking-Paths type that Gelman describes; probably to the extent that published $p$ -values cannot really be taken at face value and should be "discounted" by the reader by some substantial fraction. However, this attitude seems to produce much more subtle effects than simply a bump in the overall $p$ -values distribution just below $0.05$ and cannot really be detected by such a blunt analysis.
{ "source": [ "https://stats.stackexchange.com/questions/200766", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/45348/" ] }
200,815
I am using a classifier which returns probabilities. To calculate the AUC, I am using the pROC R package. The output probabilities from the classifier are:
probs=c(0.9865780, 0.9996340, 0.9516880, 0.9337157, 0.9778576, 0.8140116, 0.8971550, 0.8967585, 0.6322902, 0.7497237)
probs shows the probability of being in class '1'. As shown, the classifier has classified all of the samples into class '1'. The true label vector is:
truel=c(1, 1, 1, 1, 1, 0, 0, 0, 0, 0)
As shown, the classifier has misclassified 5 samples. But the AUC is:
pROC::auc(truel, probs)
Area under the curve: 1
Could you please explain to me why this happens?
The AUC is a measure of the ability to rank examples according to the probability of class membership. Thus if all of the probabilities are above 0.5 you can still have an AUC of one if all of the positive patterns have higher probabilities than all of the negative patterns. In this case there will be a decision threshold that is higher than 0.5, which would give an error rate of zero. Note that because the AUC only measures the ranking of the probabilities, it doesn't tell you if the probabilities are well calibrated (e.g. there is no systematic bias), if calibration of the probabilities is important then look at the cross-entropy metric.
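To see this with the exact numbers from the question (a small sketch in Python rather than R, purely for illustration), one can compute the AUC directly as the fraction of (positive, negative) pairs in which the positive example receives the higher probability:

probs = [0.9865780, 0.9996340, 0.9516880, 0.9337157, 0.9778576,
         0.8140116, 0.8971550, 0.8967585, 0.6322902, 0.7497237]
truel = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

pos = [p for p, t in zip(probs, truel) if t == 1]
neg = [p for p, t in zip(probs, truel) if t == 0]

# AUC = P(randomly chosen positive is scored higher than randomly chosen negative)
pairs = [(p, q) for p in pos for q in neg]
auc = sum(p > q for p, q in pairs) / len(pairs)
print(auc)                 # 1.0: every positive outranks every negative
print(min(pos), max(neg))  # 0.9337... vs 0.8971...: any threshold in between gives zero errors

Every positive score (minimum about 0.934) exceeds every negative score (maximum about 0.897), so all 25 pairs are concordant and the AUC is 1; a decision threshold anywhere between those two values would classify all ten samples correctly, even though the default 0.5 threshold does not.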
{ "source": [ "https://stats.stackexchange.com/questions/200815", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/74753/" ] }
200,982
When performing Bayesian inference, we operate by maximizing our likelihood function in combination with the priors we have about the parameters. Because the log-likelihood is more convenient, we effectively maximize $\sum \ln (\text{prior}) + \sum \ln (\text{likelihood})$ using an MCMC or otherwise which generates the posterior distributions (using a pdf for each parameter's prior and each data point's likelihood). If we have a lot of data, the likelihood from that is going to overwhelm any information that the prior provides, by simple mathematics. Ultimately, this is good and by design; we know that the posterior will converge to just the likelihood with more data because it is supposed to. For problems defined by conjugate priors, this is even provable exactly. Is there a way to decide when priors don't matter for a given likelihood function and some sample size?
It is not that easy. Information in your data overwhelms the prior information not simply when your sample size is large, but when your data provides enough information to overwhelm what the prior says. Uninformative priors get easily persuaded by data, while strongly informative ones may be more resistant. In the extreme case of ill-defined priors, your data may not be able to overcome them at all (e.g. zero density over some region). Recall that by Bayes' theorem we use two sources of information in our statistical model: out-of-data prior information, and information conveyed by the data through the likelihood function: $$ \color{violet}{\text{posterior}} \propto \color{red}{\text{prior}} \times \color{lightblue}{\text{likelihood}} $$ When using an uninformative prior (or maximum likelihood), we try to bring the minimal possible prior information into our model. With informative priors we bring a substantial amount of information into the model. So both the data and the prior inform us which values of the estimated parameters are more plausible, or believable. They can bring different information and each of them can overpower the other one in some cases. Let me illustrate this with a very basic beta-binomial model (see here for a detailed example). With an "uninformative" prior, a pretty small sample may be enough to overpower it. On the plots below you can see priors (red curve), likelihood (blue curve), and posteriors (violet curve) of the same model with different sample sizes. On the other hand, you can have an informative prior that is close to the true value; it would also be persuaded by the data, though not as easily as a weakly informative one. The case is very different with an informative prior that is far from what the data says (using the same data as in the first example). In such a case you need a larger sample to overcome the prior. So it is not only about sample size, but also about what your data and your prior are. Notice that this is a desired behavior, because when using informative priors we want to potentially include out-of-data information in our model, and this would be impossible if large samples always discarded the priors. Because of complicated posterior-likelihood-prior relations, it is always good to look at the posterior distribution and do some posterior predictive checks (Gelman, Meng and Stern, 1996; Gelman and Hill, 2006; Gelman et al, 2004). Moreover, as described by Spiegelhalter (2004), you can use different priors, for example "pessimistic" ones that express doubts about large effects, or "enthusiastic" ones that are optimistic about the estimated effects. Comparing how different priors behave with your data may help to informally assess the extent to which the posterior was influenced by the prior. Spiegelhalter, D. J. (2004). Incorporating Bayesian ideas into health-care evaluation. Statistical Science, 156-174. Gelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. B. (2004). Bayesian data analysis. Chapman & Hall/CRC. Gelman, A. and Hill, J. (2006). Data analysis using regression and multilevel/hierarchical models. Cambridge University Press. Gelman, A., Meng, X. L., and Stern, H. (1996). Posterior predictive assessment of model fitness via realized discrepancies. Statistica Sinica, 733-760.
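A minimal sketch of the beta-binomial illustration described above (the specific priors, true proportion, and sample sizes are arbitrary choices for illustration): with a Beta(a, b) prior and k successes in n trials, the posterior is Beta(a + k, b + n - k), so you can watch the posterior mean drift from the prior towards the data as n grows.

import numpy as np

rng = np.random.default_rng(0)
true_p = 0.9
priors = {"uninformative": (1, 1), "informative, far from truth": (50, 50)}  # Beta(a, b)

for name, (a, b) in priors.items():
    for n in [10, 100, 1000]:
        k = rng.binomial(n, true_p)            # observed successes
        post_mean = (a + k) / (a + b + n)      # mean of the Beta(a + k, b + n - k) posterior
        print(name, n, round(post_mean, 3))
# With the flat prior the posterior mean moves close to the data proportion even for small n;
# the strong Beta(50, 50) prior centred at 0.5 takes a much larger sample to overcome.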
{ "source": [ "https://stats.stackexchange.com/questions/200982", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/56282/" ] }
201,038
It occurred to me today that the distribution $$ f(x)\propto\exp\left(-\frac{|x-\mu|^p}{\beta}\right) $$ could be viewed as a compromise between the Gaussian and Laplace distributions, for $x\in\mathbb{R}, p\in[1,2]$ and $\beta>0.$ Does such a distribution have a name? And does it have an expression for its normalization constant? The calculus stumps me, because I don't know how to even begin solving for $C$ in the integral $$ 1=C\cdot \int_{-\infty}^\infty \exp\left(-\frac{|x-\mu|^p}{\beta}\right) dx $$
Short Answer The pdf you describe is most appropriately known as a Subbotin distribution ... see the paper in 1923 by Subbotin which has exactly the same functional form, with say $Y = X-\mu$. Subbotin, M. T. (1923), On the law of frequency of error, Matematicheskii Sbornik, 31, 296-301. who enters the pdf at his equation 5, of form: $$f(y) = K \exp\left[-\left(\frac{|y|}{\sigma}\right)^p\right]$$ with constant of integration: $K = \large\frac{p}{2 \sigma \Gamma \left(\frac{1}{p}\right)}$, as per Xian's derivation where $\beta = \sigma^p$ Longer answer Wikipedia is unfortunately not always 'up to date', or accurate, or sometimes just 80 years behind the times. After Subbotin (1923), the distribution has been widely used in the literature, including: Diananda, P.H. (1949), Note on some properties of maximum likelihood estimates, Proceedings of the Cambridge Philosophical Society, 45, 536-544. Turner, M.E. (1960), On heuristic estimation methods, Biometrics, 16(2), 299-301. Zeckhauser, R. and Thompson, M. (1970), Linear regression with non-normal error terms, The Review of Economics and Statistics, 52, 280-286. McDonald, J.B. and Newey, W.K. (1988), Partially adaptive estimation of regression models via the generalized t distribution, Econometric Theory, 4, 428-457. Johnson, N. L., Kotz, S. and Balakrishnan, N. (1995), Continuous Univariate Distributions, volume 2, 2nd edition, Wiley: New York (1995, p.422) Mineo, A.M. and Ruggieri, M. (2005), A software tool for the Exponential Power distribution: the normalp package, Journal of Statistical Software, 12(4), 1-21. ... all before the paper referenced on Wiki. Aside from being 80 years out of date, the name used on Wiki 'a Generalised Normal' also seems inappropriate because there are an infinity of distributions that are generalisations of the Normal, and the name is, in any event, ambiguous to the literature. It also fails to acknowledge the original author.
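For the calculus that the question says is the sticking point, here is a sketch of the standard substitution; it reproduces the constant quoted above. $$\int_{-\infty}^{\infty}\exp\left[-\left(\frac{|y|}{\sigma}\right)^p\right]dy = 2\int_{0}^{\infty}\exp\left[-\left(\frac{y}{\sigma}\right)^p\right]dy,$$ and substituting $t=(y/\sigma)^p$, so that $y=\sigma t^{1/p}$ and $dy=\frac{\sigma}{p}t^{1/p-1}dt$, gives $$2\int_{0}^{\infty}\frac{\sigma}{p}\,t^{\frac{1}{p}-1}e^{-t}\,dt=\frac{2\sigma}{p}\,\Gamma\!\left(\frac{1}{p}\right),$$ so $K=\dfrac{p}{2\sigma\Gamma(1/p)}$. With $\beta=\sigma^p$ (i.e. $\sigma=\beta^{1/p}$), the constant in the question's parameterization is $C=\dfrac{p}{2\beta^{1/p}\Gamma(1/p)}$.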
{ "source": [ "https://stats.stackexchange.com/questions/201038", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/22311/" ] }
201,146
My question in the title is self-explanatory, but I would like to give it some context. The ASA released a statement earlier this week “ on p-values: context, process, and purpose ”, outlining various common misconceptions of the p-value, and urging caution in not using it without context and thought (which could be said of just about any statistical method, really). In response to the ASA, professor Matloff wrote a blog post titled: After 150 Years, the ASA Says No to p-values . Then professor Benjamini (and I) wrote a response post titled It’s not the p-values’ fault – reflections on the recent ASA statement . In response to it professor Matloff asked in a followup post : What I would like to see [... is] — a good, convincing example in which p-values are useful. That really has to be the bottom line. To quote his two major arguments against the usefulness of the $p$-value: With large samples, significance tests pounce on tiny, unimportant departures from the null hypothesis. Almost no null hypotheses are true in the real world, so performing a significance test on them is absurd and bizarre. I am very interested in what other Cross Validated community members think of this question/arguments, and of what may constitute a good response to it.
I will consider both Matloff's points: With large samples, significance tests pounce on tiny, unimportant departures from the null hypothesis. The logic here is that if somebody reports highly significant $p=0.0001$, then from this number alone we cannot say if the effect is large and important or irrelevantly tiny (as can happen with large $n$). I find this argument strange and cannot connect to it at all, because I have never seen a study that would report a $p$-value without reporting [some equivalent of] effect size. Studies that I read would e.g. say (and usually show on a figure) that group A had such and such mean, group B had such and such mean and they were significantly different with such and such $p$-value. I can obviously judge for myself if the difference between A and B is large or small. (In the comments, @RobinEkman pointed me to several highly-cited studies by Ziliak & McCloskey ( 1996 , 2004 ) who observed that the majority of the economics papers trumpet "statistical significance" of some effects without paying much attention to the effect size and its "practical significance" (which, Z&MS argue, can often be minuscule). This is clearly bad practice. However, as @MatteoS explained below, the effect sizes (regression estimates) are always reported, so my argument stands.) Almost no null hypotheses are true in the real world, so performing a significance test on them is absurd and bizarre. This concern is also often voiced, but here again I cannot really connect to it. It is important to realize that researchers do not increase their $n$ ad infinitum . In the branch of neuroscience that I am familiar with, people will do experiments with $n=20$ or maybe $n=50$, say, rats. If there is no effect to be seen then the conclusion is that the effect is not large enough to be interesting. Nobody I know would go on breeding, training, recording, and sacrificing $n=5000$ rats to show that there is some statistically significant but tiny effect. And whereas it might be true that almost no real effects are exactly zero, it is certainly true that many many real effects are small enough to be detected with reasonable sample sizes that reasonable researchers are actually using, exercising their good judgment. (There is a valid concern that sample sizes are often not big enough and that many studies are underpowered. So perhaps researchers in many fields should rather aim at, say, $n=100$ instead of $n=20$. Still, whatever the sample size is, it puts a limit on the effect size that the study has power to detect.) In addition, I do not think I agree that almost no null hypotheses are true, at least not in the experimental randomized studies (as opposed to observational ones). Two reasons: Very often there is a directionality to the prediction that is being tested; researcher aims to demonstrate that some effect is positive $\delta>0$. By convention this is usually done with a two-sided test assuming a point null $H_0: \delta=0$ but in fact this is rather a one-sided test trying to reject $H_0: \delta<0$. (@CliffAB's answer, +1, makes a related point.) And this can certainly be true. Even talking about the point "nil" null $H_0: \delta=0$, I do not see why they are never true. Some things are just not causally related to other things. Look at the psychology studies that are failing to replicate in the last years: people feeling the future; women dressing in red when ovulating; priming with old-age-related words affecting walking speed; etc. 
It might very well be that there are no causal links here at all and so the true effects are exactly zero. Norm Matloff himself suggests using confidence intervals instead of $p$-values because they show the effect size. Confidence intervals are good, but notice one disadvantage of a confidence interval as compared to the $p$-value: a confidence interval is reported for one particular coverage value, e.g. $95\%$. Seeing a $95\%$ confidence interval does not tell me how broad a $99\%$ confidence interval would be. But one single $p$-value can be compared with any $\alpha$ and different readers can have different alphas in mind. In other words, I think that for somebody who likes to use confidence intervals, a $p$-value is a useful and meaningful additional statistic to report. I would like to give a long quote about the practical usefulness of $p$-values from my favorite blogger Scott Alexander; he is not a statistician (he is a psychiatrist) but has lots of experience with reading psychological/medical literature and scrutinizing the statistics therein. The quote is from his blog post on the fake chocolate study which I highly recommend. Emphasis mine. [...] But suppose we're not allowed to do $p$-values. All I do is tell you "Yeah, there was a study with fifteen people that found chocolate helped with insulin resistance" and you laugh in my face. Effect size is supposed to help with that. But suppose I tell you "There was a study with fifteen people that found chocolate helped with insulin resistance. The effect size was $0.6$." I don't have any intuition at all for whether or not that's consistent with random noise. Do you? Okay, then they say we’re supposed to report confidence intervals. The effect size was $0.6$, with $95\%$ confidence interval of $[0.2, 1.0]$. Okay. So I check the lower bound of the confidence interval, I see it’s different from zero. But now I’m not transcending the $p$-value. I’m just using the p-value by doing a sort of kludgy calculation of it myself – “$95\%$ confidence interval does not include zero” is the same as “$p$-value is less than $0.05$”. (Imagine that, although I know the $95\%$ confidence interval doesn’t include zero, I start wondering if the $99\%$ confidence interval does. If only there were some statistic that would give me this information!) But wouldn’t getting rid of $p$-values prevent “$p$-hacking”? Maybe, but it would just give way to “d-hacking”. You don’t think you could test for twenty different metabolic parameters and only report the one with the highest effect size? The only difference would be that p-hacking is completely transparent – if you do twenty tests and report a $p$ of $0.05$, I know you’re an idiot – but d-hacking would be inscrutable. If you do twenty tests and report that one of them got a $d = 0.6$, is that impressive? [...] But wouldn’t switching from $p$-values to effect sizes prevent people from making a big deal about tiny effects that are nevertheless statistically significant? Yes, but sometimes we want to make a big deal about tiny effects that are nevertheless statistically significant! Suppose that Coca-Cola is testing a new product additive, and finds in large epidemiological studies that it causes one extra death per hundred thousand people per year. That’s an effect size of approximately zero, but it might still be statistically significant. And since about a billion people worldwide drink Coke each year, that’s ten thousand deaths. 
If Coke said “Nope, effect size too small, not worth thinking about”, they would kill almost two milli-Hitlers worth of people. For some further discussion of various alternatives to $p$-values (including Bayesian ones), see my answer in ASA discusses limitations of $p$-values - what are the alternatives?
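A small illustrative sketch, not part of the original answer, of the CI/p-value duality discussed above: a single p-value can be held against any alpha a reader has in mind, while each confidence interval is tied to one coverage level. The data are simulated and all numbers are arbitrary.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 1.0, 30)   # hypothetical control group
group_b = rng.normal(0.5, 1.0, 30)   # hypothetical treatment group

t_stat, p_value = stats.ttest_ind(group_b, group_a)   # pooled two-sample t-test
print(p_value)   # one number, comparable with any alpha

# Confidence intervals for the mean difference at two coverage levels,
# built from the same pooled standard error as the t-test above.
n_a, n_b = len(group_a), len(group_b)
sp2 = ((n_a - 1) * group_a.var(ddof=1) + (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2)
se = np.sqrt(sp2 * (1 / n_a + 1 / n_b))
diff = group_b.mean() - group_a.mean()
for level in (0.95, 0.99):
    half = stats.t.ppf(1 - (1 - level) / 2, n_a + n_b - 2) * se
    print(level, (round(diff - half, 3), round(diff + half, 3)))
# "The 95% interval excludes zero" carries the same information as "p < 0.05";
# a reader who cares about alpha = 0.01 needs the 99% interval, or simply the p-value.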
{ "source": [ "https://stats.stackexchange.com/questions/201146", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/253/" ] }
201,569
What is the difference between dropout and drop connect? AFAIK, dropout randomly drops hidden nodes during training but keeps them in testing, and drop connect drops connections. But isn't dropping connections equivalent to dropping the hidden nodes? Aren't the nodes (or connections) just a set of weights?
DropOut and DropConnect are both methods intended to prevent "co-adaptation" of units in a neural network. In other words, we want units to independently extract features from their inputs instead of relying on other neurons to do so. Suppose we have a multilayered feedforward network like this one (the topology doesn't really matter). We're worried about the yellow hidden units in the middle layer co-adapting. DropOut To apply DropOut, we randomly select a subset of the units and clamp their output to zero, regardless of the input; this effectively removes those units from the model. A different subset of units is randomly selected every time we present a training example. Below are two possible network configurations. On the first presentation (left), the 1st and 3rd units are disabled, but the 2nd and 3rd units have been randomly selected on a subsequent presentation. At test time, we use the complete network but rescale the weights to compensate for the fact that all of them can now become active (e.g., if you drop half of the nodes, the weights should also be halved). DropConnect DropConnect works similarly, except that we disable individual weights (i.e., set them to zero), instead of nodes, so a node can remain partially active. Schematically, it looks like this: Comparison These methods both work because they effectively let you train several models at the same time, then average across them for testing. For example, the yellow layer has four nodes, and thus 16 possible DropOut states (all enabled, #1 disabled, #1 and #2 disabled, etc). DropConnect is a generalization of DropOut because it produces even more possible models, since there are almost always more connections than units. However, you can get similar outcomes on an individual trial. For example, the DropConnect network on the right has effectively dropped Unit #2 since all of the incoming connections have been removed. Further Reading The original papers are pretty accessible and contain more details and empirical results. DropOut: Hinton et al., 2012 , Srivasta et al., 2014; JMLR DropConnect: Wan et al., 2013
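A minimal NumPy sketch, not from the original answer, of the distinction drawn above: DropOut zeroes a unit's entire output, while DropConnect zeroes individual weights so a unit can stay partially active. Layer sizes and the drop probability are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5)        # activations entering a layer with 4 output units
W = rng.normal(size=(4, 5))   # weights of that layer
p_drop = 0.5

# DropOut: one Bernoulli mask entry per output unit; a dropped unit outputs exactly 0.
unit_mask = rng.random(4) > p_drop
out_dropout = unit_mask * (W @ x)

# DropConnect: one mask entry per individual connection; units can remain partially active.
weight_mask = rng.random(W.shape) > p_drop
out_dropconnect = (weight_mask * W) @ x

# Test time (DropOut): keep all units but rescale so expected activations match training.
out_test = (1 - p_drop) * (W @ x)
print(out_dropout, out_dropconnect, out_test, sep="\n")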
{ "source": [ "https://stats.stackexchange.com/questions/201569", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/101559/" ] }
201,909
Under what circumstances should the data be normalized/standardized when building a regression model? When I asked this question to a stats major, he gave me an ambiguous answer: "depends on the data". But what does that really mean? It should either be a universal rule or a checklist of sorts where, if certain conditions are met, the data either should or shouldn't be normalized.
Sometimes standardization helps for numerical issues (not so much these days with modern numerical linear algebra routines) or for interpretation, as mentioned in the other answer. Here is one "rule" that I will use for answering the question myself: Is the regression method you are using invariant, in that the substantive answer does not change with standardization? Ordinary least squares is invariant, while methods such as lasso or ridge regression are not. So, for invariant methods there is no real need for standardization, while for non-invariant methods you should probably standardize (or at least think it through). The following is somewhat related: Dropping one of the columns when using one-hot encoding
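A quick sketch, not part of the original answer, of the invariance point on simulated data: ordinary least squares returns the same fitted values with or without standardization, whereas a penalized method such as ridge does not. The data, scales and penalty strength are arbitrary.

import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) * np.array([1.0, 10.0, 100.0])   # wildly different scales
y = X @ np.array([0.5, -0.2, 0.03]) + rng.normal(size=100)
Xs = StandardScaler().fit_transform(X)

# OLS: standardization changes the coefficients but not the substantive fit.
ols_raw = LinearRegression().fit(X, y).predict(X)
ols_std = LinearRegression().fit(Xs, y).predict(Xs)
print(np.allclose(ols_raw, ols_std))      # True: invariant

# Ridge: the penalty treats all coefficients alike, so the scaling matters.
ridge_raw = Ridge(alpha=10.0).fit(X, y).predict(X)
ridge_std = Ridge(alpha=10.0).fit(Xs, y).predict(Xs)
print(np.allclose(ridge_raw, ridge_std))  # False: not invariant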
{ "source": [ "https://stats.stackexchange.com/questions/201909", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/98128/" ] }
202,348
Is there a branch of statistics that deals with data for which exact values are not known , but for each individual, we know either a maximum or minimum bound to the value ? I suspect that my problem stems largely from the fact that I am struggling to articulate it in statistical terms, but hopefully an example will help to clarify: Say there are two connected populations $A$ and $B$ such that, at some point, members of $A$ may "transition" into $B$ , but the reverse is not possible. The timing of the transition is variable, but non-random. For example, $A$ could be "individuals without offspring" and $B$ "individuals with at least one offspring". I am interested in the age this progression occurs but I only have cross-sectional data. For any given individual, I can find out if they belong to $A$ or $B$. I also know the age of these individuals. For each individual in population $A$, I know that the age at transition will be GREATER THAN their current age. Likewise, for members of $B$, I know that the age at transition was LESS THAN their current age. But I don't know the exact values. Say I have some other factor that I want to compare with the age of transition. For example, I want to know whether an individual's subspecies or body size affects the age of first offspring. I definitely have some useful information that should inform those questions: on average, of the individuals in $A$, older individuals will have a later transition. But the information is imperfect , particularly for younger individuals. And vice versa for population $B$. Are there established methods to deal with this sort of data ? I do not necessarily need a full method of how to carry out such an analysis, just some search terms or useful resources to start me off in the right place! Caveats: I am making the simplifying assumption that transition from $A$ to $B$ is instantaneous. I am also prepared to assume that most individuals will at some point progress to $B$, assuming they live long enough. And I realise that longitutinal data would be very helpful, but assume that it is not available in this case. Apologies if this is a duplicate, as I said, part of my problem is that I don't know what I should be searching for. For the same reason, please add other tags if appropriate. Sample dataset: Ssp indicates one of two subspecies, $X$ or $Y$. Offspring indicates either no offspring ($A$) or at least one offspring ($B$) age ssp offsp 21 Y A 20 Y B 26 X B 33 X B 33 X A 24 X B 34 Y B 22 Y B 10 Y B 20 Y A 44 X B 18 Y A 11 Y B 27 X A 31 X B 14 Y B 41 X B 15 Y A 33 X B 24 X B 11 Y A 28 X A 22 X B 16 Y A 16 Y B 24 Y B 20 Y B 18 X B 21 Y B 16 Y B 24 Y A 39 X B 13 Y A 10 Y B 18 Y A 16 Y A 21 X A 26 X B 11 Y A 40 X B 8 Y A 41 X B 29 X B 53 X B 34 X B 34 X B 15 Y A 40 X B 30 X A 40 X B Edit: example dataset changed as it wasn't very representative
This is referred to as current status data . You get one cross sectional view of the data, and regarding the response, all you know is that at the observed age of each subject, the event (in your case: transitioning from A to B) has happened or not. This is a special case of interval censoring . To formally define it, let $T_i$ be the (unobserved) true event time for subject $i$. Let $C_i$ the inspection time for subject $i$ (in your case: age at inspection). If $C_i < T_i$, the data are right censored . Otherwise, the data are left censored . We are interesting in modeling the distribution of $T$. For regression models, we are interested in modeling how that distribution changes with a set of covariates $X$. To analyze this using interval censoring methods, you want to put your data into the general interval censoring format. That is, for each subject, we have the interval $(l_i, r_i)$, which represents the interval in which we know $T_i$ to be contained. So if subject $i$ is right censored at inspection time $c_i$, we would write $(c_i, \infty)$. If it is left censored at $c_i$, we would represent it as $(0, c_i)$. Shameless plug: if you want to use regression models to analyze your data, this can be done in R using icenReg (I'm the author). In fact, in a similar question about current status data , the OP put up a nice demo of using icenReg . He starts by showing that ignoring the censoring part and using logistic regression leads to bias (important note: he is referring to using logistic regression without adjusting for age . More on this later.) Another great package is interval , which contains log-rank statistic tests, among other tools. EDIT: @EdM suggested using logistic regression to answer the problem. I was unfairly dismissive of this, saying that you would have to worry about the functional form of time. While I stand behind the statement that you should worry about the functional form of time, I realized that there was a very reasonable transformation that leads to a reasonable parametric estimator. In particular, if we use log(time) as a covariate in our model with logistic regression, we end up with a proportional odds model with a log-logistic baseline. To see this, first consider that the proportional odds regression model is defined as $\text{Odds}(t|X, \beta) = e^{X^T \beta} \text{Odds}_o(t)$ where $\text{Odds}_o(t)$ is the baseline odds of survival at time $t$. Note that the regression effects are the same as with logistic regression. So all we need to do now is show that the baseline distribution is log-logistic. Now consider a logistic regression with log(Time) as a covariate. We then have $P(Y = 1 | T = t) = \frac{\exp(\beta_0 + \beta_1 \log(t))}{1 + \exp(\beta_0 + \beta_1\log(t))}$ With a little work, you can see this as the CDF of a log-logistic model (with a non-linear transformation of the parameters). 
R demonstration that the fits are equivalent: > library(icenReg) > data(miceData) > > ## miceData contains current status data about presence > ## of tumors at sacrifice in two groups > ## in interval censored format: > ## l = lower end of interval, u = upper end > ## first three mice all left censored > > head(miceData, 3) l u grp 1 0 381 ce 2 0 477 ce 3 0 485 ce > > ## To fit this with logistic regression, > ## we need to extract age at sacrifice > ## if the observation is left censored, > ## this is the upper end of the interval > ## if right censored, is the lower end of interval > > age <- numeric() > isLeftCensored <- miceData$l == 0 > age[isLeftCensored] <- miceData$u[isLeftCensored] > age[!isLeftCensored] <- miceData$l[!isLeftCensored] > > log_age <- log(age) > resp <- !isLeftCensored > > > ## Fitting logistic regression model > logReg_fit <- glm(resp ~ log_age + grp, + data = miceData, family = binomial) > > ## Fitting proportional odds regression model with log-logistic baseline > ## interval censored model > ic_fit <- ic_par(cbind(l,u) ~ grp, + model = 'po', dist = 'loglogistic', data = miceData) > > summary(logReg_fit) Call: glm(formula = resp ~ log_age + grp, family = binomial, data = miceData) Deviance Residuals: Min 1Q Median 3Q Max -2.1413 -0.8052 0.5712 0.8778 1.8767 Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 18.3526 6.7149 2.733 0.00627 ** log_age -2.7203 1.0414 -2.612 0.00900 ** grpge -1.1721 0.4713 -2.487 0.01288 * --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 (Dispersion parameter for binomial family taken to be 1) Null deviance: 196.84 on 143 degrees of freedom Residual deviance: 160.61 on 141 degrees of freedom AIC: 166.61 Number of Fisher Scoring iterations: 5 > summary(ic_fit) Model: Proportional Odds Baseline: loglogistic Call: ic_par(formula = cbind(l, u) ~ grp, data = miceData, model = "po", dist = "loglogistic") Estimate Exp(Est) Std.Error z-value p log_alpha 6.603 737.2000 0.07747 85.240 0.000000 log_beta 1.001 2.7200 0.38280 2.614 0.008943 grpge -1.172 0.3097 0.47130 -2.487 0.012880 final llk = -80.30575 Iterations = 10 > > ## Comparing loglikelihoods > logReg_fit$deviance/(-2) - ic_fit$llk [1] 2.643219e-12 Note that the effect of grp is the same in each model, and the final log-likelihood differs only by numeric error. The baseline parameters (i.e. intercept and log_age for logistic regression, alpha and beta for the interval censored model) are different parameterizations so they are not equal. So there you have it: using logistic regression is equivalent to fitting the proportional odds with a log-logistic baseline distribution. If you're okay with fitting this parametric model, logistic regression is quite reasonable. I do caution that with interval censored data, semi-parametric models are typically favored due to difficulty of assessing model fit, but if I truly thought there was no place for fully-parametric models I would have not included them in icenReg .
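As a small illustration of the data format described earlier in this answer, and independent of the R packages mentioned, here is a hypothetical pandas sketch that puts current status data of the question's form into the general (l, u) interval-censored layout; the column names follow the question's sample dataset.

import numpy as np
import pandas as pd

# A hypothetical handful of rows in the question's format: age at inspection,
# subspecies, and offspring status (A = none yet, B = at least one).
df = pd.DataFrame({"age":   [21, 20, 26, 33, 33, 24],
                   "ssp":   ["Y", "Y", "X", "X", "X", "X"],
                   "offsp": ["A", "B", "B", "B", "A", "B"]})

# B: the transition happened before inspection  -> left censored,  interval (0, age)
# A: the transition will happen after inspection -> right censored, interval (age, inf)
df["l"] = np.where(df["offsp"] == "B", 0.0, df["age"].astype(float))
df["u"] = np.where(df["offsp"] == "B", df["age"].astype(float), np.inf)
print(df[["age", "offsp", "l", "u"]])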
{ "source": [ "https://stats.stackexchange.com/questions/202348", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/109013/" ] }
202,879
Statistics is everywhere; common usage of statistical terms is, however, often unclear. The terms probability and odds are used interchangeably in lay English despite their well-defined and different mathematical expressions. Not separating the term likelihood from probability routinely confuses physicians trying to quantify the probability of breast cancer given a positive mammography: “Oh, what nonsense. I can’t do this. You should test my daughter; she is studying medicine.” Equally widespread is the use of correlation instead of association . Or correlation implying causation . In Al Gore's famous documentary An Inconvenient Truth , a slide illustrates the correlation of ice core $\small \text{CO}_2$ and temperatures, leaving the more technical work of proving causation out of the discussion: QUESTION: Which statistical terms pose interpretation problems when used without mathematical rigor, and are, therefore, worth correcting?
It can be futile to fight against shifts in language. But parameter does not mean variable In classical statistics, which in this case starts precisely with R.A. Fisher who first used the term with this meaning, a parameter is an unknown constant to be estimated, say a population mean or correlation. In mathematics, there are related but not identical meanings, as when a curve is given parametrically. In many sciences, parameter is just another word for a measure (itself a term dense with mathematical meaning), property or variable, say length or conductivity or porosity or virtue, as the case may be. Naturally, an individual's length or virtue is unknown before it is measured. but statistically minded people can be bemused by its use for a set of such measurements. In ordinary or vulgar parlance, parameters (almost always plural) often mean the limits of something, say a personal relationship or a political policy, perhaps stemming from some original confusion with perimeter . With high prior probability it is to be presumed that Bayesians will speak for themselves on their own usages (grateful nod to @conjugateprior). skewed does not mean biased For a century or more, skewness has had a specific statistical sense of referring to asymmetry of distributions, whether assessed graphically, measured numerically, or presumed theoretically as a matter of faith or hope. For much longer, or so it may be guessed, bias has meant being wrong on average, which -- so long as we know the truth, meaning a true or correct value -- can be quantified as systematic error. Skewed in ordinary language has a common sense of being warped or distorted, and thus of being incorrect, wrong and so also biased too. That sense (so far as I have noticed, only quite recently) has begun filtering back into statistical discussions, so that the original meaning of skewness is in some danger of being blurred or submerged. correlation does not mean agreement Correlation has attracted several precise senses in statistics, which have in common an idea of a bivariate relationship perfect in some precise sense: the leading cases are linear and monotone relationship. It is often diluted, even in statistical discussions, to mean almost any kind of relationship or association. What correlation does not mean, necessarily, is agreement: thus $y = a + bx$ implies Pearson correlation of $1$ or $-1$ so long as $b \ne 0$ , but agreement $y = x$ requires the very strict condition $a =0, b= 1$ . unique does not mean distinct It is quite common to talk about the distinct values of data as unique , but unique is still ideally better preserved as meaning occurring once only. My own guess is that some of the blame stems from the Unix [sic] utility uniq and its imitators, which reduce possibly repeated values to a set in which each value really is unique. The usage, on this guess, conflates input and output of a program. (Conversely, if we talk of duplicates in data, we rarely restrict ourselves to doubletons that occur precisely twice. The term replicates would make more sense linguistically but has been pre-empted for deliberate replication of controls in experiments; the resulting response values are usually not at all identical, which is much of the point.) samples are rarely repeated In statistics, a sample includes several values, and repeated sampling is a high theoretical virtue, but one rarely practised, except by simulation, which is our customary term for any kind of faking in silico . 
In many sciences, a sample is a single object, consisting of a lump, chunk or dollop of water, soil, sediment, rock, blood, tissue, or other substances varying from attractive through benign to disgusting; far from being exceptional, taking many samples may be essential for any serious analysis. Here every field's terminology makes perfect sense to its people, but translation is sometimes needed. error does not usually mean mistake; as Harold Jeffreys pointed out, the primary sense is erratic, not erroneous. Nevertheless, we should be wary of our own sins or quirks of terminology: expected values or expectations (for means over the possible outcomes) may not be what you expect at all, and could even be impossible: in tossing a die fairly with outcomes 1 to 6, the expected value is 3.5 regression is not going backwards stationary does not mean immobile or fixed confidence has nothing to do with anyone's mental or psychological state significance has only sometimes its everyday meaning exact is often an honorific term, referring to a conveniently tractable solution or calculation rather than one appropriate to the problem right-skewed distributions to many look skewed left, and vice versa (and the terminology of right and left for skewness assumes that you are looking at something like a conventional histogram, with horizontal magnitude axis) the lognormal is so called because it's an exponentiated normal but the lognormal is more normal than the normal the Gaussian was discovered by De Moivre Poisson didn't discover the Poisson , let alone Poisson regression the bootstrap won't help you with your footwear the jackknife doesn't cut kurtosis is not a medical condition stem-and-leaf plots don't refer to plants a dummy variable is useful, not pointless or stupid who on Earth (or anywhere else) thinks that heteroscedasticity is really a preferable term over unequal variability ? robust now has at least two major technical meanings for different groups, neither of which inhibits its frequent use, even in technical discussions, to mean merely something like "asserted to behave well" IV now has at least two major meanings for different groups factor now has at least two major meanings for different groups normalize and standardize have uncountably many meanings (we really need to standardize there) versus describing a graph means vertical variable versus horizontal variable , unless it means the opposite and (last but not least, to coin a phrase) statistics has at least three major meanings. Notes: Despite any appearances to the contrary, I think this is a good, serious question. Fashions shift. Well into the twentieth century, it seems that many people (no names, no pack-drill, but Karl Pearson could be mentioned) could only invent terms by reaching for their Greek and Latin dictionaries. (It would be unfair not to give him credit for scatter plot .) But R.A. Fisher did hijack many pre-existing English words, including variance , sufficiency , efficiency and likelihood . More recently, J.W. Tukey was a master in using homely terms, but few should feel distress that sploms and badmandments did not catch on. One comment is based on recollection of "Life is [...] Multiplicative rather than additive: the log normal distribution is more normal than the normal." Anon. 1962. Bloggins's working rules. In Good, I.J. (Ed.) The scientist speculates: an anthology of partly-baked ideas. London: Heinemann, 212-213 (quotation on p.213).
{ "source": [ "https://stats.stackexchange.com/questions/202879", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/67822/" ] }
202,891
I'm a student interested in learning about probabilistic networks β€” Bayesian Networks Markov models etc. I have a good background in probability, statistics and Markov chains. Some good introductory books I found are the one by Jensen and Nielsen "Bayesian Networks and Decision Graphs", Springer and "Probabilistic Networks and Expert Systems" by Cowell et al., Springer. Which one (of the above or not) would you recommend as a first read?
{ "source": [ "https://stats.stackexchange.com/questions/202891", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/-1/" ] }
203,629
I am looking for a distribution where the probability density decreases quickly after some point away from the mean, or in my own words a "plateau-shaped distribution". Something in between the Gaussian and the uniform.
You may be looking for a distribution known under the names of generalized normal (version 1) , Subbotin distribution , or exponential power distribution. It is parametrized by location $\mu$, scale $\sigma$ and shape $\beta$ with pdf $$ \frac{\beta}{2\sigma\Gamma(1/\beta)} \exp\left[-\left(\frac{|x-\mu|}{\sigma}\right)^{\beta}\right] $$ As you can notice, for $\beta=1$ it becomes the Laplace distribution, for $\beta=2$ it becomes the normal, and as $\beta \to \infty$ it converges to the uniform distribution. If you are looking for software that has it implemented, you can check the normalp library for R (Mineo and Ruggieri, 2005). What is nice about this package is that, among other things, it implements regression with generalized normally distributed errors, i.e. minimizing the $L_p$ norm. Mineo, A. M., & Ruggieri, M. (2005). A software tool for the exponential power distribution: The normalp package. Journal of Statistical Software, 12(4), 1-24.
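For illustration, not part of the original answer: SciPy also ships this family as scipy.stats.gennorm with the same shape parameter, so a short sketch can show how the density flattens into a plateau as $\beta$ grows.

import numpy as np
from scipy.stats import gennorm

x = np.linspace(-3, 3, 7)
for beta in (1, 2, 8):   # beta=1: Laplace, beta=2: normal, large beta: plateau-like
    print(beta, np.round(gennorm.pdf(x, beta), 3))
# As beta increases, the density becomes nearly flat around the centre and then
# drops off sharply, i.e. the "plateau-shaped" behaviour asked about.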
{ "source": [ "https://stats.stackexchange.com/questions/203629", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/95569/" ] }
203,740
I am trying to understand why the output from logistic regression of these two libraries gives different results. I am using the dataset from UCLA idre tutorial , predicting admit based on gre , gpa and rank . rank is treated as categorical variable, so it is first converted to dummy variable with rank_1 dropped. An intercept column is also added. py from patsy import dmatrices from sklearn.linear_model import LogisticRegression import pandas as pd import statsmodels.api as sm df = pd.read_csv("https://stats.idre.ucla.edu/stat/data/binary.csv") y, X = dmatrices('admit ~ gre + gpa + C(rank)', df, return_type = 'dataframe') X.head() > Intercept C(rank)[T.2] C(rank)[T.3] C(rank)[T.4] gre gpa 0 1 0 1 0 380 3.61 1 1 0 1 0 660 3.67 2 1 0 0 0 800 4.00 3 1 0 0 1 640 3.19 4 1 0 0 1 520 2.93 # Output from scikit-learn model = LogisticRegression(fit_intercept = False) mdl = model.fit(X, y) model.coef_ > array([[-1.35417783, -0.71628751, -1.26038726, -1.49762706, 0.00169198, 0.13992661]]) # corresponding to predictors [Intercept, rank_2, rank_3, rank_4, gre, gpa] # Output from statsmodels logit = sm.Logit(y, X) logit.fit().params > Optimization terminated successfully. Current function value: 0.573147 Iterations 6 Intercept -3.989979 C(rank)[T.2] -0.675443 C(rank)[T.3] -1.340204 C(rank)[T.4] -1.551464 gre 0.002264 gpa 0.804038 dtype: float64 The output from statsmodels is the same as shown on the idre website, but I am not sure why scikit-learn produces a different set of coefficients. Does it minimize some different loss function? Is there any documentation that states the implementation?
Your clue to figuring this out should be that the parameter estimates from the scikit-learn estimation are uniformly smaller in magnitude than the statsmodels counterpart. This might lead you to believe that scikit-learn applies some kind of parameter regularization. You can confirm this by reading the scikit-learn documentation . There is no way to switch off regularization in scikit-learn, but you can make it ineffective by setting the tuning parameter C to a large number. Here is how that works in your case: # module imports from patsy import dmatrices import pandas as pd from sklearn.linear_model import LogisticRegression import statsmodels.discrete.discrete_model as sm # read in the data & create matrices df = pd.read_csv("http://www.ats.ucla.edu/stat/data/binary.csv") y, X = dmatrices('admit ~ gre + gpa + C(rank)', df, return_type = 'dataframe') # sklearn output model = LogisticRegression(fit_intercept = False, C = 1e9) mdl = model.fit(X, y) model.coef_ # sm logit = sm.Logit(y, X) logit.fit().params UPDATE : As correctly pointed out in the comments below, now you can switch off the regularization in scikit-learn by setting penalty='none' (see the docs ).
{ "source": [ "https://stats.stackexchange.com/questions/203740", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/109887/" ] }
203,745
I am conducting a meta-analysis of diagnostic test accuracy studies focusing on myocardial perfusion imaging. I have used first Meta-Disc , but only for descriptive purposes, as it is clear that univariate approaches such as those provided by this package are biased (eg Takwoingi et al ). Then, I have found the following bivariate methods, and used several of them: Bayesian bivariate model using bamdit in R; Bayesian bivariate model using meta4diag in R; Bayesian HSROC using HSROC in R; frequentist bivariate model using metamisc in R; frequentist bivariate model using metandi in Stata; frequentist copula mixed model using CopulaREMADA in R; frequentist hierarchical summary receiver operating characteristic (HSROC) model using metandi in Stata; frequentist proportional hazard model using mada in R; frequentist Reitsma model using mada in R; frequentist Reitsma model using Metatron in R. Results are similar across many of these methods, albeit obviously not identical. Yet, I would favor the Reitsma model as available in mada as it gives me more comprehensive analytical and graphical results. My questions stem from this actual project but are quite more general. Is there a method which is best for meta-analysis of diagnostic test accuracy studies? Or are they more or less similar? Is there any other method not listed above which is better still?
{ "source": [ "https://stats.stackexchange.com/questions/203745", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/107799/" ] }
204,088
Unless I'm mistaken, in a linear model, the distribution of the response is assumed to have a systematic component and a random component. The error term captures the random component. Therefore, if we assume that the error term is Normally distributed, doesn't that imply that the response is also Normally distributed? I think it does, but then statements such as the one below seem rather confusing: And you can see clearly that the only assumption of "normality" in this model is that the residuals (or "errors" $\epsilon_i$) should be normally distributed. There is no assumption about the distribution of the predictor $x_i$ or the response variable $y_i$ . Source: Predictors, responses and residuals: What really needs to be normally distributed?
The standard OLS model is $Y = X \beta + \varepsilon$ with $\varepsilon \sim \mathcal N(\vec 0, \sigma^2 I_n)$ for a fixed $X \in \mathbb R^{n \times p}$. This does indeed mean that $Y|\{X, \beta, \sigma^2\} \sim \mathcal N(X\beta, \sigma^2 I_n)$, although this is a consequence of our assumption on the distribution of $\varepsilon$, rather than actually being the assumption. Also keep in mind that I'm talking about the conditional distribution of $Y$, not the marginal distribution of $Y$. I'm focusing on the conditional distribution because I think that's what you're really asking about. I think the part that is confusing is that this doesn't mean that a histogram of $Y$ will look normal. We are saying that the entire vector $Y$ is a single draw from a multivariate normal distribution where each element has a potentially different mean $E(Y_i|X_i) = X_i^T\beta$. This is not the same as being an iid normal sample. The errors $\varepsilon$ actually are an iid sample so a histogram of them would look normal (and that's why we do a QQ plot of the residuals, not the response). Here's an example: suppose we are measuring height $H$ for a sample of 6th graders and 12th graders. Our model is $H_i = \beta_0 + \beta_1I(\text{12th grader}) + \varepsilon_i$ with $\varepsilon_i \sim \ \text{iid} \ \mathcal N(0, \sigma^2)$. If we look at a histogram of the $H_i$ we'll probably see a bimodal distribution, with one peak for 6th graders and one peak for 12th graders, but that doesn't represent a violation of our assumptions.
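A small simulation, not from the original answer, of the height example above: the marginal histogram of $Y$ is bimodal, yet the residuals form a single normal-looking sample. All numbers are made up for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 1000
grade12 = rng.integers(0, 2, size=n)                    # 0 = 6th grader, 1 = 12th grader
height = 150 + 25 * grade12 + rng.normal(0, 5, size=n)  # H_i = b0 + b1*I(12th grader) + eps_i

# Marginal distribution of Y: two separate peaks, clearly not a single normal.
counts, _ = np.histogram(height, bins=20)
print(counts)

# Residuals after subtracting each group's fitted mean: one approximately normal sample.
resid = height.copy()
for g in (0, 1):
    resid[grade12 == g] -= height[grade12 == g].mean()
print(round(resid.mean(), 3), round(resid.std(), 3))    # roughly 0 and 5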
{ "source": [ "https://stats.stackexchange.com/questions/204088", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/10028/" ] }
204,105
I am working on an assignment where we have to study the affect of gamma and C parameters on SVM with RBF kernel. I use python's sklearn library and grid search with 10 fold cross validation (with a test size of .2) to test different values of gamma. For each value of gamma I also compute mean-validation-score( the mean of 10 fold cv accuracy scores) and accuracy and AUC of the ROC curve on the training data. For a particular value i get a poor mean-validation-score of .5 but a 100% accuracy and AUC of 1. I am not able to interpret these values. I know 100% accuracy on training data indicates over-fitting, but I cannot explain/reconcile such a huge difference between AUC and cv score. Is the over-fitting so extreme that it causes the cv score to be really poor The different classifiers are shown below. The classifier # 5 is the one with cv score of .5 and accuracy of 1. I am not sure how to interpret the graph as it shows the entire graph colored as red, but still gives an accuracy of 1. But shouldn't the accuracy be around .5 since , half the samples (blue) are missclassified. . The relevant code is : def run(): X,y = create_data(1000,2) gamma_range = np.logspace(start=-15,stop= 15,num =4, base=2) gamma_range = np.append(gamma_range,[2**30,2**(-30)]) gamma_grid = dict(gamma=gamma_range) print(gamma_grid) C_range = np.logspace(start=-15,stop= 15,num =4, base=2) #C_range = np.append(C_range,[2**20,2**-20]) C_grid = dict(C=C_range) print(C_grid) plt.figure(figsize=(10, 7)) h=.2 x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1 y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h),np.arange(y_min, y_max, h)) grid = GridSearchCV(SVC(kernel="rbf"), param_grid=gamma_grid, cv=10) grid.fit(X,y) grid_scores = grid.grid_scores_ print("The best parameters are %s with a score of %0.2f"% (grid.best_params_, grid.best_score_)) print(grid_scores) for i, score in enumerate(grid_scores): gamma = score.parameters['gamma'] clf = SVC(kernel="rbf",gamma=gamma) clf.fit(X,y) Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()]) Z = Z.reshape(xx.shape) y_predicted = clf.predict(X) print(score.cv_validation_scores) print("(%d) Ξ³=2^%d, C=%s, CV-Score = %.3f, accuracy=%.2f, AUC = %.3f" %(i+1,np.log2(gamma), "Default",score.mean_validation_score,accuracy_score(y,y_predicted),roc_auc_score(y,y_predicted))) # visualize decision function for these parameters plt.subplot(3, 4, i+1) plt.title("(%d) Ξ³=2^%d C=%s CV-Score = %.3f AUC = %.3f" % (i+1,np.log2(gamma), "Default",score.mean_validation_score,roc_auc_score(y,y_predicted)), size='medium') # visualize parameter's effect on decision function plt.pcolormesh(xx, yy, -Z, cmap=plt.cm.RdBu) plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.RdBu_r) plt.xticks(()) plt.yticks(()) plt.axis('tight')
{ "source": [ "https://stats.stackexchange.com/questions/204105", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/110103/" ] }
204,115
I am reading the chapter on the bias-variance tradeoff in The Elements of Statistical Learning and I don't understand the formula on page 29. Let the data arise from a model such that $$ Y = f(x)+\varepsilon$$ where $\varepsilon$ is a random number with expected value $\hat{\varepsilon} = E[\varepsilon]=0$ and variance $E[(\varepsilon - \hat\varepsilon)^2]=E[\varepsilon^2]=\sigma^2$. Let the expected error of the model be $$ E[(Y-f_k(x))^2] $$ where $f_k(x)$ is the prediction of our learner $k$ at $x$. According to the book, the error is $$ \newcommand{\Bias}{\rm Bias} \newcommand{\Var}{\rm Var} E[(Y-f_k(x))^2]=\sigma^2+\Bias(f_k)^2+\Var(f_k(x)). $$ My question is: why is the bias term not $0$? Developing the formula for the error, I see: \begin{align} E[(Y-f_k(x))^2] &= E[(f(x)+\varepsilon-f_k(x))^2] \\ &= E[(f(x)-f_k(x))^2] + 2E[(f(x)-f_k(x))\varepsilon] + E[\varepsilon^2] \\ &= \Var(f_k(x))+2E[(f(x)-f_k(x))\varepsilon]+\sigma^2 \end{align} since $\varepsilon$ is an independent random number, $2E[(f(x)-f_k(x))\varepsilon] = 2E[(f(x)-f_k(x))]E[\varepsilon]=0$. Where am I wrong?
You are not wrong, but you made an error in one step since $E[(f(x)-f_k(x))^2] \ne Var(f_k(x))$. $E[(f(x)-f_k(x))^2]$ is $\text{MSE}(f_k(x)) = Var(f_k(x)) + \text{Bias}^2(f_k(x))$. \begin{align*} E[(Y-f_k(x))^2]& = E[(f(x)+\epsilon-f_k(x))^2] \\ &= E[(f(x)-f_k(x))^2]+2E[(f(x)-f_k(x))\epsilon]+E[\epsilon^2]\\ &= E\left[\left(f(x) - E(f_k(x)) + E(f_k(x))-f_k(x) \right)^2 \right] + 2E[(f(x)-f_k(x))\epsilon]+\sigma^2 \\ & = Var(f_k(x)) + \text{Bias}^2(f_k(x)) + \sigma^2. \end{align*} Note: $E[(f_k(x)-E(f_k(x)))(f(x)-E(f_k(x)))] = E[f_k(x)-E(f_k(x))](f(x)-E(f_k(x))) = 0.$
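A quick numerical check of the decomposition, not part of the original answer, using a deliberately biased estimator of $f(x)$ at a single point; the particular $f(x)$, $\sigma$ and estimator are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
f_x, sigma = 2.0, 1.0
reps, n = 200_000, 10

# Each replication: observe n noisy values at x, estimate f(x) with a shrunken
# sample mean (deliberately biased), then predict an independent fresh Y at x.
y_train = f_x + sigma * rng.normal(size=(reps, n))
f_hat = 0.8 * y_train.mean(axis=1)                 # biased estimator of f(x)
y_new = f_x + sigma * rng.normal(size=reps)        # new Y = f(x) + eps

lhs = np.mean((y_new - f_hat) ** 2)                          # E[(Y - f_k(x))^2]
rhs = f_hat.var() + (f_hat.mean() - f_x) ** 2 + sigma ** 2   # Var + Bias^2 + sigma^2
print(round(lhs, 3), round(rhs, 3))                # the two sides agree closely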
{ "source": [ "https://stats.stackexchange.com/questions/204115", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/10017/" ] }
204,141
Is comparing features using F-regression the same as correlating features with the label individually and observing the $R^2$ value? I have often seen my colleagues use an F regression for feature selection in their machine learning pipeline from sklearn : sklearn.feature_selection.SelectKBest(score_func=sklearn.feature_selection.f_regression...)` Some please tell me - why does it give the same results as just correlating it with the label/depedendent variable? It is not clear to me the advantage of using F_regression in feature selection. Here's my code: I'm using the mtcars dataset from R : import pandas as pd import numpy as np from sklearn import feature_selection from sklearn.linear_model import LinearRegression #....load mtcars dataset into a pandas dataframe called "df", not shown here for conciseness # only using these numerical columns as features ['mpg', 'disp', 'drat', 'wt'] # using this column as the label: ['qsec'] model = feature_selection.SelectKBest(score_func=feature_selection.f_regression,\ k=4) results = model.fit(df[columns], df['qsec']) print results.scores_ print results.pvalues_ # Using just correlation coefficient: columns = ['mpg', 'disp', 'drat', 'wt'] for col in columns: lm = LinearRegression(fit_intercept=True) lm.fit(df[[col]], df['qsec']) print lm.score(df[[col]], df['qsec']) As suspected, the ranking of the features is exactly the same: scores using f_regression: [ 6.376702 6.95008354 0.25164249 0.94460378] scores using coefficient of determination: 0.175296320261 0.18809385182 0.00831830818303 0.0305256382746 As you can see, the second feature is ranked the highest, the first feature is second, the fourth feature is third, and the third feature is last, in both cases. Is there ever a case where the F_regression would give different results, or would rank the features differently in some way? EDIT: To summarize, I'd like to know if these two rankings of features ever give different results: 1) ranking features by their F-statistic when regressing them with the outcome individually (this is what sklearn does) AND, 2) ranking features by their R-squared value when regressing them with the outcome , again individually.
TL;DR There won't be a difference if F-regression just computes the F statistic and picks the best features. There might be a difference in the ranking, assuming F-regression does the following: start with a constant model, $M_0$; try all models $M_1$ consisting of just one feature and pick the best according to the F statistic; try all models $M_2$ consisting of $M_1$ plus one other feature and pick the best; and so on. The correlation will not be the same at each iteration. But you can still get this ranking by just computing the correlation at each step, so why does F-regression take an additional step? It does two things: Feature selection: If you want to select the $k$ best features in a machine learning pipeline, where you only care about accuracy and have measures to adjust under/overfitting, you might only care about the ranking and the additional computation is not useful. Test for significance: If you are trying to understand the effect of some variables on an output in a study, you might want to build a linear model, and only include the variables that significantly improve your model, with respect to some $p$-value. Here, F-regression comes in handy. What is an F-test? An F-test (Wikipedia) is a way of comparing the significance of the improvement of a model, with respect to the addition of new variables. You can use it when you have a basic model $M_0$ and a more complicated model $M_1$, which contains all variables from $M_0$ and some more. The F-test tells you if $M_1$ is significantly better than $M_0$, with respect to a $p$-value. To do so, it uses the residual sum of squares as an error measure, and compares the reduction in error with the number of variables added and the number of observations (more details on Wikipedia ). Adding variables, even if they are completely random, is expected to always help the model achieve lower error by adding another dimension. The goal is to figure out if the new features are really helpful or if they are random numbers but still help the model because they add a dimension. What does f_regression do? Note that I am not familiar with the Scikit-learn implementation, but let's try to figure out what f_regression is doing. The documentation states that the procedure is sequential. If the word sequential means the same as in other statistical packages, such as Matlab Sequential Feature Selection, here is how I would expect it to proceed: start with a constant model, $M_0$; try all models $M_1$ consisting of just one feature and pick the best according to the F statistic; try all models $M_2$ consisting of $M_1$ plus one other feature and pick the best; and so on. For now, I think this is a close enough approximation to answer your question: is there a difference between the ranking of f_regression and ranking by correlation? If you were to start with the constant model $M_0$ and try to find the best model with only one feature, $M_1$, you will select the same feature whether you use f_regression or your correlation-based approach, as they are both a measure of linear dependency. But if you were to go from $M_0$ to $M_1$ and then to $M_2$, there would be a difference in your scoring. Assume you have three features, $x_1, x_2, x_3$, where both $x_1$ and $x_2$ are highly correlated with the output $y$, but also highly correlated with each other, while $x_3$ is only mildly correlated with $y$. Your method of scoring would assign the best scores to $x_1$ and $x_2$, but the sequential method might not. 
In the first round, it would pick the best feature, say $x_1$, to create $M_1$. Then, it would evaluate both $x_2$ and $x_3$ for $M_2$. As $x_2$ is highly correlated with an already selected feature, most of the information it contains is already incorporated into the model, and therefore the procedure might select $x_3$. While it is less correlated to $y$, it is more correlated to the residuals, the part that $x_1$ does not already explain, than $x_2$. This is how the two procedures you propose are different. You can still emulate the same effect with your idea by building your model sequentially and measuring the difference in gain for each additional feature instead of comparing them to the constant model $M_0$ as you are doing now. The result would not be different from the f_regression results. The reason for this function to exist is to provide this sequential feature selection, and additionally it converts the result to an F measure which you can use to judge significance. The goal of the F-test is to provide a significance level. If you want to make sure the features you are including are significant with respect to your $p$-value, you use an F-test. If you just want to include the $k$ best features, you can use the correlation only. Additional material: Here is an introduction to the F-test you might find helpful
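Regarding the concrete observation in the question, here is a short check, not from either post, of the algebraic link between the per-feature F statistic and the squared correlation, $F = r^2 (n-2)/(1 - r^2)$: the map is monotone in $r^2$, so ranking by one is ranking by the other, which matches the identical orderings reported in the question. The data are simulated.

import numpy as np
from sklearn.feature_selection import f_regression

rng = np.random.default_rng(0)
n = 50
X = rng.normal(size=(n, 4))
y = X @ np.array([1.0, 0.5, 0.1, 0.0]) + rng.normal(size=n)

F, _ = f_regression(X, y)   # per-feature (univariate) F statistics

# Squared Pearson correlation of each feature with y, converted to an F statistic.
r2 = np.array([np.corrcoef(X[:, j], y)[0, 1] ** 2 for j in range(X.shape[1])])
F_from_r2 = r2 * (n - 2) / (1 - r2)

print(np.allclose(F, F_from_r2))         # True: same statistic up to rounding
print(np.argsort(-F), np.argsort(-r2))   # identical feature rankings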
{ "source": [ "https://stats.stackexchange.com/questions/204141", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/119114/" ] }
204,154
The question: I am struggling to understand how the prediction is kept within the $[0,1]$ interval when doing binary classification with gradient boosting. Assume we are working on a binary classification problem, and our objective function is the log loss, $-\sum_i \left[ y_i \log(H_m(x_i)) + (1-y_i) \log(1-H_m(x_i)) \right]$, where $y$ is the target variable $\in \{0,1\}$ and $H$ is our current model. When training the next weak learner $h_i$ such that our new model is $H_i = H_{i-1} + h_i$, what is the mechanism that is supposed to keep $H_i \in [0,1]$? Or, maybe a more relevant question, is there such a mechanism? More information on what I am doing: I am trying to implement gradient boosting, using regression trees. What I do to avoid it is that I multiply $h_i$ by a factor $c \in [0,c_{\text{max}}]$, such that $H + c_{\text{max}}h$ does not go below zero or above one, and I select the $c$ in this range that minimizes the loss function. This brings the following problem: After some rounds, I have one point that is perfectly classified, and the best split available to push the classifier in the direction of the gradient wants to push this point above one, which I make sure does not happen by setting $c = 0$. Thus all subsequent iterations will select the same split and the same $c = 0$. I tried common regularization practices: decreasing the learning rate by multiplying $c$ by $\mu = 0.01$ (this just delays the problem); subsampling the feature space, but some of the points are very easy to classify, they tick almost every box in the "is this a positive?" form, and almost every "good split" shows this behavior. I think this is not a problem of parameters, and there should be a more sound way to fix this. I am not discarding the possibility that my implementation is broken, but I have found nothing addressing this problem. What we are manipulating, in the context of the logistic loss, should be a probability, so how do we avoid it? My intuition would be to put the model we are building, $H$, into a sigmoid function such that it is bounded to $[0,1]$, and I guess that would work, but I want to know if there are other solutions. Since gradient boosting seems to be used successfully in classification tasks, a "correct" (i.e., with justification) solution should exist.
I like to think of this in analogy with the case of linear models, and their extension to GLMs (generalized linear models). In a linear model, we fit a linear function to predict our response $$ \hat y = \beta_0 + \beta_1 x_1 + \cdots \beta_n x_n $$ To generalize to other situations, we introduce a link function, which transforms the linear part of the model onto the scale of the response (technically this is an inverse link, but I think it's easier to think of it this way, transforming the linear predictor into a response, than transforming the response into a linear predictor). For example, the logistic model uses the sigmoid (or logit) function $$ \hat y = \frac{1}{1 + \exp(-(\beta_0 + \beta_1 x_1 + \cdots \beta_n x_n))} $$ and poisson regression uses an exponential function $$ \hat y = \exp(\beta_0 + \beta_1 x_1 + \cdots \beta_n x_n) $$ To construct an analogy with gradient boosting, we replace the linear part of these models with the sum of the boosted trees. So, for example, the gaussian case (analogous with linear regression) becomes the well known $$ \hat y = \sum_i h_i $$ where $h_i$ is our sequence of weak learners. The binomial case is analogous to logistic regression (as you noted in your answer) $$ \hat y = \frac{1}{1 + \exp\left(-\sum_i h_i\right)} $$ and poisson boosting is analogous to poisson regression $$ \hat y = \exp\left(\sum_i h_i\right) $$ The question remains, how does one fit these boosted models when the link function is involved? For the gaussian case, where the link is the identity function, the often heard mantra of fitting weak learners to the residuals of the current working model works out, but this doesn't really generalize to the more complicated models. The trick is to write the loss function being minimized as a function of the linear part of the model (i.e. the $\sum_i \beta_i x_i$ part of the GLM formulation). For example, the binomial loss is usually encountered as $$ \sum_i y_i \log(p_i) + (1 - y_i)\log(1 - p_i) $$ Here, the loss is a function of $p_i$, the predicted values on the same scale as the response, and $p_i$ is a non-linear transformation of the linear predictor $L_i$. Instead, we can re-express this as a function of $L_i$, (in this case also known as the log odds) $$ \sum_i y_i L_i - \log(1 + \exp(L_i)) $$ Then we can take the gradient of this with respect to $L$, and boost to directly minimize this quantity. Only at the very end, when we want to produce predictions for the user, do we apply the link function to the final sequence of weak learners to put the predictions on the same scale as the response. While fitting the model, we internally work on the linear scale the entire time.
{ "source": [ "https://stats.stackexchange.com/questions/204154", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/101070/" ] }
204,484
Fig 1. Logistic Function Fig 2. Sigmoid Function is it more like generalized kind of sigmoid function where you could have a higher maximum value?
Yes, the sigmoid function is a special case of the Logistic function when $L=1$, $k=1$, $x_0 =0$. If you play around with the parameters (Wolfram Alpha) , you will see that $L$ is the maximum value the function can take. $e^{-k(x-x_0)}$ is always greater or equal than 0, so the maximum point is achieved when it it 0, and is at $L/1$. $x_0$ controls where on the $x$ axis the growth should the, because if you put $x_0$ in the function, $x_0 - x_0$ cancel out and $e^0 = 1$, so you end up with $f(x_0) = L/2$, the midpoint of the growth. the parameter $k$ controls how steep the change from the minimum to the maximum value is.
{ "source": [ "https://stats.stackexchange.com/questions/204484", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/110199/" ] }
204,741
I'm analyzing a data set using a mixed effects model with one fixed effect (condition) and two random effects (participant due to the within subject design and pair). The model was generated with the lme4 package: exp.model<-lmer(outcome~condition+(1|participant)+(1|pair),data=exp) . Next, I performed a likelihood ratio test of this model against the model without the fixed effect (condition) and have a significant difference. There are 3 conditions in my data set so I want to do a multiple comparison but I am not sure which method to use . I found a number of similar questions on CrossValidated and other forums but I am still quite confused. From what I've seen, people have suggested using 1. The lsmeans package - lsmeans(exp.model,pairwise~condition) which gives me the following output: condition lsmean SE df lower.CL upper.CL Condition1 0.6538060 0.03272705 47.98 0.5880030 0.7196089 Condition2 0.7027413 0.03272705 47.98 0.6369384 0.7685443 Condition3 0.7580522 0.03272705 47.98 0.6922493 0.8238552 Confidence level used: 0.95 $contrasts contrast estimate SE df t.ratio p.value Condition1 - Condition2 -0.04893538 0.03813262 62.07 -1.283 0.4099 Condition1 - Condition3 -0.10424628 0.03813262 62.07 -2.734 0.0219 Condition2 - Condition3 -0.05531090 0.03813262 62.07 -1.450 0.3217 P value adjustment: tukey method for comparing a family of 3 estimates 2. The multcomp package in two different ways - using mcp glht(exp.model,mcp(condition="Tukey")) resulting in Simultaneous Tests for General Linear Hypotheses Multiple Comparisons of Means: Tukey Contrasts Fit: lmer(formula = outcome ~ condition + (1 | participant) + (1 | pair), data = exp, REML = FALSE) Linear Hypotheses: Estimate Std. Error z value Pr(>|z|) Condition2 - Condition1 == 0 0.04894 0.03749 1.305 0.392 Condition3 - Condition1 == 0 0.10425 0.03749 2.781 0.015 * Condition3 - Condition2 == 0 0.05531 0.03749 1.475 0.303 --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 (Adjusted p values reported -- single-step method) and using lsm glht(exp.model,lsm(pairwise~condition)) resulting in Note: df set to 62 Simultaneous Tests for General Linear Hypotheses Fit: lmer(formula = outcome ~ condition + (1 | participant) + (1 | pair), data = exp, REML = FALSE) Linear Hypotheses: Estimate Std. Error t value Pr(>|t|) Condition1 - Condition2 == 0 -0.04894 0.03749 -1.305 0.3977 Condition1 - Condition3 == 0 -0.10425 0.03749 -2.781 0.0195 * Condition2 - Condition3 == 0 -0.05531 0.03749 -1.475 0.3098 --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 (Adjusted p values reported -- single-step method) As you can see, the methods give different results. This is my first time working with R and stats so something might be going wrong but I wouldn't know where. My questions are: What are the differences between the presented methods? I read in an answer to a related questions that it's about the degrees of freedom ( lsmeans vs. glht ). Are there some rules or recommendations when to use which one, i.e., method 1 is good for this type of data set/model etc.? Which result should I report? Without knowing better I'd probably just go and report the highest p-value I got to play it safe but it would be nice to have a better reason. Thanks
Not a complete answer... The difference between glht(myfit, mcp(myfactor="Tukey")) and the two other methods is that this way uses a "z" statistic (normal distribution), whereas the other ones use a "t" statistic (Student distribution). The "z" statistic it the same as a "t" statistic with an infinite degree of freedom. This method is an asymptotic one and it provides smaller p-values and shorter confidence intervals than the other ones. The p-values can be too small and the confidence intervals can be too short if the dataset is small. When I run lsmeans(myfit, pairwise~myfactor) the following message appears: Loading required namespace: pbkrtest That means that lsmeans (for a lmer model) uses the pbkrtest package which implements the Kenward & Rogers method for the degrees of freedom of the "t" statistic. This method intends to provide better p-values and confidence intervals than the asymptotic one (there's no difference when the degree of freedom is large). Now, about the difference between lsmeans(myfit, pairwise~myfactor)$contrasts and glht(myfit, lsm(pairwise~factor) , I have just done some tests and my observations are the following ones: lsm is an interface between the lsmeans package and the multcomp package (see ?lsm ) for a balanced design there's no difference between the results for an unbalanced design, I observed small differences between the results (the standard errors and the t ratio) Unfortunately I do not know what is the cause of these differences. It looks like lsm calls lsmeans only to get the linear hypotheses matrix and the degrees of freedom, but lsmeans uses a different way to calculate the standard errors.
{ "source": [ "https://stats.stackexchange.com/questions/204741", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/76621/" ] }
205,312
I'm studying statistics and often come across formulae containing the log and I'm always confused if I should interpret that as the standard meaning of log , i.e. base 10, or if in statistics the symbol log is generally assumed to be the natural log ln . In particular I'm studying the Good-Turing Frequency Estimation as an example, but my question is more of general one.
It's safe to assume that without explicit base $\log=\ln$ in statistics, because base 10 log is not used very often in statistics. However, other posters bring up a point that $\log_{10}$ or other bases can be common in some other fields, where statistics is applied, e.g. information theory. So, when you read papers in other fields, it gets confusing at times. Wikipedia's entropy page is a good example of confusing usage of $\log$. In the same page they mean base 2, $e$ and any base. You can figure out by the context which one is meant, but it requires reading the text. This is not a good way to present the material. Compare it to Logarithm page where the base is clearly shown in every formula or $\ln$ is used. I personally think this is the way to go: always show the base when $\log$ sign is used. This would also be ISO compliant for the standard doesn't define usage of unspecified base with $\log$ symbol as @Henry pointed out. Finally, ISO 31-11 standard prescribes $\text{lb}$ and $\lg$ signs for base 2 and 10 logarithms. Both are rarely used these days. I remember that we used $\lg$ in high school, but that was in another century in another world. I have never seen it since used in a statistical context. There isn't even the tag for $\text{lb}$ in LaTeX.
{ "source": [ "https://stats.stackexchange.com/questions/205312", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/44640/" ] }
206,592
We know that we must apply Benjamini Hochberg-like corrections for multiple hypothesis testing to experiments based on a single data set, in order to control the false discovery rate, otherwise all experiments that give a positive result could be false. But why don't we apply this same principle to all experiments since the beginning of time, regardless of where the data comes from? After all, over half of published scientific results which are deemed "significant" are now known to be false and irreproducible, and there is no reason why this couldn't just as easily be 100%. As scientists only tend to publish positive results, we have no idea of the number of negative results, so we have no idea if what we publish are only ever false positives - positive results that have cropped up by pure random chance under the null hypothesis. Meanwhile, there is nothing to say that the maths behind multiple hypothesis testing corrections should apply only to results from the same data set, and not to results from all experimental data acquired over time. It seems that the whole of science has become one big fishing expedition based on false or weak hypotheses, so how can we control for this? How can we control the false discovery rate, if all we ever publish are independent results taken without applying any correction for multiple hypothesis testing over all experiments performed to date? Is it possible to control the false discovery rate without applying some such correction?
This would obviously be an absolute nightmare to do in practice, but suppose it could be done: we appoint a Statistical Sultan and everyone running a hypothesis test reports their raw $p$-values to this despot. He performs some kind of global (literally) multiple comparisons correction and replies with the corrected versions. Would this usher in a golden age of science and reason? No, probably not. Let's start by considering one pair of hypotheses, as in a $t$-test. We measure some property of two groups and want to distinguish between two hypotheses about that property: $$\begin{align} H_0:& \textrm{ The groups have the same mean.} \\ H_A:& \textrm{ The groups have different means.} \end{align}$$ In a finite sample, the means are unlikely to be exactly equal even if $H_0$ really is true: measurement error and other sources of variability can push individual values around. However, the $H_0$ hypothesis is in some sense "boring", and researchers are typically concerned with avoiding a "false positive" situation wherein they claim to have found a difference between the groups where none really exists. Therefore, we only call results "significant" if they seem unlikely under the null hypothesis, and, by convention, that unlikeliness threshold is set at 5%. This applies to a single test. Now suppose you decide to run multiple tests and are willing to accept a 5% chance of mistakenly accepting $H_0$ for each one. With enough tests, you therefore almost certainly going to start making errors, and lots of them. The various multiple corrections approaches are intended to help you get back to a nominal error rate that you have already chosen to tolerate for individual tests. They do so in slightly different ways. Methods that control the Family-Wise Error Rate , like the Bonferroni , Sidak , and Holm procedures , say "You wanted a 5% chance of making an error on a single test, so we'll ensure that you there's no more than a 5% chance of making any errors across all of your tests." Methods that control the False Discovery Rate instead say "You are apparently okay with being wrong up to 5% of the time with a single test, so we'll ensure that no more than 5% of your 'calls' are wrong when doing multiple tests". (See the difference?) Now, suppose you attempted to control the family-wise error rate of all hypothesis tests ever run. You are essentially saying that you want a <5% chance of falsely rejecting any null hypothesis, ever. This sets up an impossibly stringent threshold and inference would be effectively useless but there's an even more pressing issue: your global correction means you are testing absolutely nonsensical "compound hypotheses" like $$\begin{align} H_1: &\textrm{Drug XYZ changes T-cell count } \wedge \\ &\textrm{Grapes grow better in some fields } \wedge&\\ &\ldots \wedge \ldots \wedge \ldots \wedge \ldots \wedge \\&\textrm{Men and women eat different amounts of ice cream} \end{align} $$ With False Discovery Rate corrections, the numerical issue isn't quite so severe, but it is still a mess philosophically. Instead, it makes sense to define a "family" of related tests, like a list of candidate genes during a genomics study, or a set of time-frequency bins during a spectral analysis. Tailoring your family to a specific question lets you actually interpret your Type I error bound in a direct way. For example, you could look at a FWER-corrected set of p-values from your own genomic data and say "There's a <5% chance that any of these genes are false positives." 
This is a lot better than a nebulous guarantee that covers inferences done by people you don't care about on topics you don't care about. The flip side of this is that he appropriate choice of "family" is debatable and a bit subjective (Are all genes one family or can I just consider the kinases?) but it should be informed by your problem and I don't believe anyone has seriously advocated defining families nearly so extensively. How about Bayes? Bayesian analysis offers coherent alternative to this problem--if you're willing to move a bit away from the Frequentist Type I/Type II error framework. We start with some non-committal prior over...well...everything. Every time we learn something, that information is combined with the prior to generate a posterior distribution, which in turn becomes the prior for the next time we learn something. This gives you a coherent update rule and you could compare different hypotheses about specific things by calculating the Bayes factor between two hypotheses. You could presumably factor out large chunks of the model, which wouldn't even make this particularly onerous. There is a persistent...meme that Bayesian methods don't require multiple comparisons corrections. Unfortunately, the posterior odds are just another test statistic for frequentists (i.e., people who care about Type I/II errors). They don't have any special properties that control these types of errors (Why would they?) Thus, you're back in intractable territory, but perhaps on slightly more principled ground. The Bayesian counter-argument is that we should focus on what we can know now and thus these error rates aren't as important. On Reproduciblity You seem to be suggesting that improper multiple comparisons-correction is the reason behind a lot of incorrect/unreproducible results. My sense is that other factors are more likely to be an issue. An obvious one is that pressure to publish leads people to avoid experiments that really stress their hypothesis (i.e., bad experimental design). For example, [in this experiment] (part of Amgen's (ir)reproduciblity initative 6 , it turns out that the mice had mutations in genes other than the gene of interest. Andrew Gelman also likes to talk about the Garden of Forking Paths , wherein researchers choose a (reasonable) analysis plan based on the data, but might have done other analyses if the data looked different. This inflates $p$-values in a similar way to multiple comparisons, but is much harder to correct for afterward. Blatantly incorrect analysis may also play a role, but my feeling (and hope) is that that is gradually improving.
{ "source": [ "https://stats.stackexchange.com/questions/206592", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/109510/" ] }
207,425
Is there some intuitive explanation for this terminology? Why is it this way, and not the predictor(s) being regressed on the outcome? Ideally I'm hoping that a proper explanation of why this terminology exists will help students remember it, and stop them from saying it the wrong way around.
I do not know what the etymology of "is regressed on" is but here is the interpretation that I have in mind when I am saying or hearing this expression. Consider the following figure from The Elements of Statistical Learning by Hastie et al.: In its core, linear regression amounts to orthogonal projection of $\mathbf y$ on (onto) $\mathbf X$, where $\mathbf y$ is the $n$-dimensional vector of observations of the dependent variable and $\mathbf X$ is the subspace spanned by the predictor vectors. This is a very useful interpretation of linear regression. Since $y$ is being projected on $X$, that is what I think when I hear that $y$ is "regressed on" $X$. From this point of view, it would make less sense to say that $X$ is regressed on $y$ or that $y$ is regressed "against" or "with" $X$. Ideally I'm hoping that a proper explanation of why this terminology exists will help students remember it, and stop them from saying it the wrong way around. As I said, I doubt that this is an explanation of why this terminology exists (perhaps only of why it persists?), but I am sure it can help students remember it.
{ "source": [ "https://stats.stackexchange.com/questions/207425", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9162/" ] }
207,434
I have data from 214 countries that range from 1990 to 2014. My dependent variables (I'm doing more than one regression) are just primary/secondary net enrollment rates for both sexes/males/females, so six variables. My independent variables are 'energy consumption (kWh per capita),' 'ict goods imports (% of total imports),' 'internet users (per 100 people),' '% with access to improved sanitation facilities,' 'health expenditure per capita PPP,' 'GNI PPP,' '#ofteachers in whatever I'm regressing,' and 'age dependency ratio, young.' I'm using a panel data fixed effects model. My question is this β€” do I keep all of these independent variables as is? I have one professor who has suggested that I change them into growth rates, but the data has empty spaces in between different years in the same country.
I do not know what the etymology of "is regressed on" is but here is the interpretation that I have in mind when I am saying or hearing this expression. Consider the following figure from The Elements of Statistical Learning by Hastie et al.: In its core, linear regression amounts to orthogonal projection of $\mathbf y$ on (onto) $\mathbf X$, where $\mathbf y$ is the $n$-dimensional vector of observations of the dependent variable and $\mathbf X$ is the subspace spanned by the predictor vectors. This is a very useful interpretation of linear regression. Since $y$ is being projected on $X$, that is what I think when I hear that $y$ is "regressed on" $X$. From this point of view, it would make less sense to say that $X$ is regressed on $y$ or that $y$ is regressed "against" or "with" $X$. Ideally I'm hoping that a proper explanation of why this terminology exists will help students remember it, and stop them from saying it the wrong way around. As I said, I doubt that this is an explanation of why this terminology exists (perhaps only of why it persists?), but I am sure it can help students remember it.
{ "source": [ "https://stats.stackexchange.com/questions/207434", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/112277/" ] }
207,760
It's obvious many times why one prefers an unbiased estimator. But, are there any circumstances under which we might actually prefer a biased estimator over an unbiased one?
Yes. Often it is the case that we are interested in minimizing the mean squared error, which can be decomposed into variance + bias squared . This is an extremely fundamental idea in machine learning, and statistics in general. Frequently we see that a small increase in bias can come with a large enough reduction in variance that the overall MSE decreases. A standard example is ridge regression. We have $\hat \beta_R = (X^T X + \lambda I)^{-1}X^T Y$ which is biased; but if $X$ is ill conditioned then $Var(\hat \beta) \propto (X^T X)^{-1}$ may be monstrous whereas $Var(\hat \beta_R)$ can be much more modest. Another example is the kNN classifier . Think about $k = 1$ : we assign a new point to its nearest neighbor. If we have a ton of data and only a few variables we can probably recover the true decision boundary and our classifier is unbiased; but for any realistic case, it is likely that $k = 1$ will be far too flexible (i.e. have too much variance) and so the small bias is not worth it (i.e. the MSE is larger than more biased but less variable classifiers). Finally, here's a picture. Suppose that these are the sampling distributions of two estimators and we are trying to estimate 0. The flatter one is unbiased, but also much more variable. Overall I think I'd prefer to use the biased one, because even though on average we won't be correct, for any single instance of that estimator we'll be closer. $$ \ $$ Update I mention the numerical issues that happen when $X$ is ill conditioned and how ridge regression helps. Here's an example. I'm making a matrix $X$ which is $4 \times 3$ and the third column is nearly all 0, meaning that it is almost not full rank, which means that $X^T X$ is really close to being singular. x <- cbind(0:3, 2:5, runif(4, -.001, .001)) ## almost reduced rank > x [,1] [,2] [,3] [1,] 0 2 0.000624715 [2,] 1 3 0.000248889 [3,] 2 4 0.000226021 [4,] 3 5 0.000795289 (xtx <- t(x) %*% x) ## the inverse of this is proportional to Var(beta.hat) [,1] [,2] [,3] [1,] 14.0000000 26.00000000 3.08680e-03 [2,] 26.0000000 54.00000000 6.87663e-03 [3,] 0.0030868 0.00687663 1.13579e-06 eigen(xtx)$values ## all eigenvalues > 0 so it is PD, but not by much [1] 6.68024e+01 1.19756e+00 2.26161e-07 solve(xtx) ## huge values [,1] [,2] [,3] [1,] 0.776238 -0.458945 669.057 [2,] -0.458945 0.352219 -885.211 [3,] 669.057303 -885.210847 4421628.936 solve(xtx + .5 * diag(3)) ## very reasonable values [,1] [,2] [,3] [1,] 0.477024087 -0.227571147 0.000184889 [2,] -0.227571147 0.126914719 -0.000340557 [3,] 0.000184889 -0.000340557 1.999998999 Update 2 As promised, here's a more thorough example. First, remember the point of all of this: we want a good estimator. There are many ways to define 'good'. Suppose that we've got $X_1, ..., X_n \sim \ iid \ \mathcal N(\mu, \sigma^2)$ and we want to estimate $\mu$ . Let's say that we decide that a 'good' estimator is one that is unbiased. This isn't optimal because, while it is true that the estimator $T_1(X_1, ..., X_n) = X_1$ is unbiased for $\mu$ , we have $n$ data points so it seems silly to ignore almost all of them. To make that idea more formal, we think that we ought to be able to get an estimator that varies less from $\mu$ for a given sample than $T_1$ . This means that we want an estimator with a smaller variance. So maybe now we say that we still want only unbiased estimators, but among all unbiased estimators we'll choose the one with the smallest variance. 
This leads us to the concept of the uniformly minimum variance unbiased estimator (UMVUE), an object of much study in classical statistics. IF we only want unbiased estimators, then choosing the one with the smallest variance is a good idea. In our example, consider $T_1$ vs. $T_2(X_1, ..., X_n) = \frac{X_1 + X_2}{2}$ and $T_n(X_1, ..., X_n) = \frac{X_1 + ... + X_n}{n}$ . Again, all three are unbiased but they have different variances: $Var(T_1) = \sigma^2$ , $Var(T_2) = \frac{\sigma^2}{2}$ , and $Var(T_n) = \frac{\sigma^2}{n}$ . For $n > 2$ $T_n$ has the smallest variance of these, and it's unbiased, so this is our chosen estimator. But often unbiasedness is a strange thing to be so fixated on (see @Cagdas Ozgenc's comment, for example). I think this is partly because we generally don't care so much about having a good estimate in the average case, but rather we want a good estimate in our particular case. We can quantify this concept with the mean squared error (MSE) which is like the average squared distance between our estimator and the thing we're estimating. If $T$ is an estimator of $\theta$ , then $MSE(T) = E((T - \theta)^2)$ . As I've mentioned earlier, it turns out that $MSE(T) = Var(T) + Bias(T)^2$ , where bias is defined to be $Bias(T) = E(T) - \theta$ . Thus we may decide that rather than UMVUEs we want an estimator that minimizes MSE. Suppose that $T$ is unbiased. Then $MSE(T) = Var(T) = Bias(T)^2 = Var(T)$ , so if we are only considering unbiased estimators then minimizing MSE is the same as choosing the UMVUE. But, as I showed above, there are cases where we can get an even smaller MSE by considering non-zero biases. In summary, we want to minimize $Var(T) + Bias(T)^2$ . We could require $Bias(T) = 0$ and then pick the best $T$ among those that do that, or we could allow both to vary. Allowing both to vary will likely give us a better MSE, since it includes the unbiased cases. This idea is the variance-bias trade-off that I mentioned earlier in the answer. Now here are some pictures of this trade-off. We're trying to estimate $\theta$ and we've got five models, $T_1$ through $T_5$ . $T_1$ is unbiased and the bias gets more and more severe until $T_5$ . $T_1$ has the largest variance and the variance gets smaller and smaller until $T_5$ . We can visualize the MSE as the square of the distance of the distribution's center from $\theta$ plus the square of the distance to the first inflection point (that's a way to see the SD for normal densities, which these are). We can see that for $T_1$ (the black curve) the variance is so large that being unbiased doesn't help: there's still a massive MSE. Conversely, for $T_5$ the variance is way smaller but now the bias is big enough that the estimator is suffering. But somewhere in the middle there is a happy medium, and that's $T_3$ . It has reduced the variability by a lot (compared with $T_1$ ) but has only incurred a small amount of bias, and thus it has the smallest MSE. You asked for examples of estimators that have this shape: one example is ridge regression, where you can think of each estimator as $T_\lambda(X, Y) = (X^T X + \lambda I)^{-1} X^T Y$ . You could (perhaps using cross-validation) make a plot of MSE as a function of $\lambda$ and then choose the best $T_\lambda$ .
{ "source": [ "https://stats.stackexchange.com/questions/207760", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/68473/" ] }
207,794
I'm training a neural network to classify a set of objects into n-classes. Each object can belong to multiple classes at the same time (multi-class, multi-label). I read that for multi-class problems it is generally recommended to use softmax and categorical cross entropy as the loss function instead of mse and I understand more or less why. For my problem of multi-label it wouldn't make sense to use softmax of course as each class probability should be independent from the other. So my final layer is just sigmoid units that squash their inputs into a probability range 0..1 for every class. Now I'm not sure what loss function I should use for this. Looking at the definition of categorical crossentropy I believe it would not apply well to this problem as it will only take into account the output of neurons that should be 1 and ignores the others. Binary cross entropy sounds like it would fit better, but I only see it ever mentioned for binary classification problems with a single output neuron. I'm using python and keras for training in case it matters.
If you are using keras, just put sigmoids on your output layer and binary_crossentropy on your cost function. If you are using tensorflow, then can use sigmoid_cross_entropy_with_logits . But for my case this direct loss function was not converging. So I ended up using explicit sigmoid cross entropy loss $(y \cdot \ln(\text{sigmoid}(\text{logits})) + (1-y) \cdot \ln(1-\text{sigmoid}(\text{logits})))$ . You can make your own like in this Example Sigmoid, unlike softmax don't give probability distribution around $n_{classes}$ as output, but independent probabilities. If on average any row is assigned less labels then you can use softmax_cross_entropy_with_logits because with this loss while the classes are mutually exclusive, their probabilities need not be. All that is required is that each row of labels is a valid probability distribution. If they are not, the computation of the gradient will be incorrect.
{ "source": [ "https://stats.stackexchange.com/questions/207794", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/112508/" ] }
207,815
I am fairly new to random forests. In the past, I have always compared the accuracy of fit vs test against fit vs train to detect any overfitting. But I just read here that: "In random forests, there is no need for cross-validation or a separate test set to get an unbiased estimate of the test set error. It is estimated internally , during the run..." The small paragraph above can be found under the The out-of-bag (oob) error estimate Section. This Out of Bag Error concept is completely new to me and what's a little confusing is how the OOB error in my model is 35% (or 65% accuracy), but yet, if I apply cross validation to my data (just a simple holdout method) and compare both fit vs test against fit vs train I get a 65% accuracy and a 96% accuracy respectively. In my experience, this is considered overfitting but the OOB holds a 35% error just like my fit vs test error. Am I overfitting? Should I even be using cross validation to check for overfitting in random forests? In short, I am not sure whether I should trust the OOB to get an unbiased error of the test set error when my fit vs train indicates that I am overfitting!
training error (as in predict(model, data=train) ) is typically useless. Unless you do (non-standard) pruning of the trees, it cannot be much above 0 by design of the algorithm . Random forest uses bootstrap aggregation of decision trees, which are known to be overfit badly. This is like training error for a 1-nearest-neighbour classifier. However, the algorithm offers a very elegant way of computing the out-of-bag error estimate which is essentially an out-of-bootstrap estimate of the aggregated model's error). The out-of-bag error is the estimated error for aggregating the predictions of the $\approx \frac{1}{e}$ fraction of the trees that were trained without that particular case. The models aggregated for the out-of-bag error will only be independent, if there is no dependence between the input data rows. I.e. each row = one independent case, no hierarchical data structure / no clustering / no repeated measurements. So the out-of-bag error is not exactly the same (less trees for aggregating, more training case copies) as a cross validation error, but for practical purposes it is close enough. What would make sense to look at in order to detect overfitting is comparing out-of-bag error with an external validation. However, unless you know about clustering in your data, a "simple" cross validation error will be prone to the same optimistic bias as the out-of-bag error: the splitting is done according to very similar principles. You'd need to compare out-of-bag or cross validation with error for a well-designed test experiment to detect this.
{ "source": [ "https://stats.stackexchange.com/questions/207815", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/108416/" ] }
208,529
Merkle & Steyvers (2013) write: To formally define a proper scoring rule, let $f$ be a probabilistic forecast of a Bernoulli trial $d$ with true success probability $p$. Proper scoring rules are metrics whose expected values are minimized if $f = p$. I get that this is good because we want to encourage forecasters to generating forecasts that honestly reflect their true beliefs, and don't want to give them perverse incentives to do otherwise. Are there any real-world examples in which it's appropriate to use an improper scoring rule? Reference Merkle, E. C., & Steyvers, M. (2013). Choosing a strictly proper scoring rule. Decision Analysis, 10(4), 292-304
It is appropriate to use an improper scoring rule when the purpose is actually forecasting, but not inference. I don't really care whether another forecaster is cheating or not when I am the one who is going to be doing the forecast. Proper scoring rules ensure that during estimation process the model approaches the true data generating process (DGP). This sounds promising because as we approach the true DGP we will be also doing good in terms of forecasting under any loss function. The catch is that most of the time (actually in reality almost always) our model search space doesn't contain the true DGP. We end up approximating the true DGP with some functional form that we propose. In this more realistic setting, if our forecasting task is easier than to figure out the entire density of the true DGP we may actually do better. This is especially true for classification. For example the true DGP can be very complex but the classification task can be very easy. Yaroslav Bulatov provided the following example in his blog: http://yaroslavvb.blogspot.ro/2007/06/log-loss-or-hinge-loss.html As you can see below the true density is wiggly but it is very easy to build a classifier to separate data generated by this into two classes. Simply if $x \ge 0$ output class 1, and if $x < 0$ output class 2. Instead of matching the exact density above we propose the below crude model, which is quite far from the true DGP. However it does perfect classification. This is found by using hinge loss, which is not proper. On the other hand if you decide to find the true DGP with log-loss (which is proper) then you start fitting some functionals, as you don't know what the exact functional form you need a priori. But as you try harder and harder to match it, you start misclassifying things. Note that in both cases we used the same functional forms. In the improper loss case it degenerated into a step function which in turn did perfect classification. In the proper case it went berserk trying to satisfy every region of the density. Basically we don't always need to achieve the true model to have accurate forecasts. Or sometimes we don't really need to do good on the entire domain of the density, but be very good only on certain parts of it.
{ "source": [ "https://stats.stackexchange.com/questions/208529", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/9162/" ] }
208,936
I don't have computer vision background, yet when I read some image processing and convolutional neural networks related articles and papers, I constantly face the term, translation invariance , or translation invariant . Or I read alot that the convolution operation provides translation invariance ?!! what does this mean? I myself always translated it to myself as if it means if we change an image in any shape, the actual concept of the image doesn't change. For example if I rotate an image of a lets say tree, it's again a tree no matter what I do to that picture. And I myself consider all operations that can happen to an image and transform it in a way (crop it, resize it, gray-scale it,color it etc...) to be this way. I have no idea if this is true so I would be grateful if anyone could explain this to me .
You're on the right track. Invariance means that you can recognize an object as an object, even when its appearance varies in some way. This is generally a good thing, because it preserves the object's identity, category, (etc) across changes in the specifics of the visual input, like relative positions of the viewer/camera and the object. The image below contains many views of the same statue. You (and well-trained neural networks) can recognize that the same object appears in every picture, even though the actual pixel values are quite different. Note that translation here has a specific meaning in vision, borrowed from geometry. It does not refer to any type of conversion, unlike say, a translation from French to English or between file formats. Instead, it means that each point/pixel in the image has been moved the same amount in the same direction. Alternately, you can think of the origin as having been shifted an equal amount in the opposite direction. For example, we can generate the 2nd and 3rd images in the first row from the first by moving each pixel 50 or 100 pixels to the right. One can show that the convolution operator commutes with respect to translation. If you convolve $f$ with $g$ , it doesn't matter if you translate the convolved output $f*g$ , or if you translate $f$ or $g$ first, then convolve them. Wikipedia has a bit more . One approach to translation-invariant object recognition is to take a "template" of the object and convolve it with every possible location of the object in the image. If you get a large response at a location, it suggests that an object resembling the template is located at that location. This approach is often called template-matching . Invariance vs. Equivariance Santanu_Pattanayak's answer ( here ) points out that there is a difference between translation invariance and translation equivariance . Translation invariance means that the system produces exactly the same response, regardless of how its input is shifted. For example, a face-detector might report "FACE FOUND" for all three images in the top row. Equivariance means that the system works equally well across positions, but its response shifts with the position of the target. For example, a heat map of "face-iness" would have similar bumps at the left, center, and right when it processes the first row of images. This is is sometimes an important distinction, but many people call both phenomena "invariance", especially since it is usually trivial to convert an equivariant response into an invariant one--just disregard all the position information).
{ "source": [ "https://stats.stackexchange.com/questions/208936", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/14306/" ] }
208,940
I am using glasso function from glasso package, as follow: obj <- glasso(var(X), rho = 0.09, zero = info, approx=TRUE) Regardless of rho value, all of entries in obj$w , estimated covariance matrix, are zero. Do you have any idea why this happens? For your information, the dimension of var(X) is 1990 x 1990 and the number of rows in info is 1959841 . EDIT: You can download X and info variables as RData file from this link: https://www.dropbox.com/s/t9s4iw6ulbys72o/varX.RData?dl=0
You're on the right track. Invariance means that you can recognize an object as an object, even when its appearance varies in some way. This is generally a good thing, because it preserves the object's identity, category, (etc) across changes in the specifics of the visual input, like relative positions of the viewer/camera and the object. The image below contains many views of the same statue. You (and well-trained neural networks) can recognize that the same object appears in every picture, even though the actual pixel values are quite different. Note that translation here has a specific meaning in vision, borrowed from geometry. It does not refer to any type of conversion, unlike say, a translation from French to English or between file formats. Instead, it means that each point/pixel in the image has been moved the same amount in the same direction. Alternately, you can think of the origin as having been shifted an equal amount in the opposite direction. For example, we can generate the 2nd and 3rd images in the first row from the first by moving each pixel 50 or 100 pixels to the right. One can show that the convolution operator commutes with respect to translation. If you convolve $f$ with $g$ , it doesn't matter if you translate the convolved output $f*g$ , or if you translate $f$ or $g$ first, then convolve them. Wikipedia has a bit more . One approach to translation-invariant object recognition is to take a "template" of the object and convolve it with every possible location of the object in the image. If you get a large response at a location, it suggests that an object resembling the template is located at that location. This approach is often called template-matching . Invariance vs. Equivariance Santanu_Pattanayak's answer ( here ) points out that there is a difference between translation invariance and translation equivariance . Translation invariance means that the system produces exactly the same response, regardless of how its input is shifted. For example, a face-detector might report "FACE FOUND" for all three images in the top row. Equivariance means that the system works equally well across positions, but its response shifts with the position of the target. For example, a heat map of "face-iness" would have similar bumps at the left, center, and right when it processes the first row of images. This is is sometimes an important distinction, but many people call both phenomena "invariance", especially since it is usually trivial to convert an equivariant response into an invariant one--just disregard all the position information).
{ "source": [ "https://stats.stackexchange.com/questions/208940", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/74753/" ] }
209,426
After reading this blog post about Bayesian structural time series models, I wanted to look at implementing this in the context of a problem I'd previously used ARIMA for. I have some data with some known (but noisy) seasonal components - there are definitely an annual, monthly and weekly components to this, and also some effects due to special days (such as federal or religious holidays). I have used the bsts package to implement this and as far as I can tell I haven't done anything wrong, although the components and prediction simply don't look as I'd expect. It isn't clear to me if my implementation is wrong, incomplete or has some other problem. The full time series looks like this: I can train the model on some subset of the data, and the model generally looks good in terms of the fit (plot is below). The code I am using to do this is here: library(bsts) predict_length = 90 training_cut_date <- '2015-05-01' test_cut_date <- as.Date(training_cut_date) + predict_length df = read.csv('input.tsv', sep ='\t') df$date <- as.Date(as.character(df$date),format="%Y-%m-%d") df_train = df[df$date < training_cut_date,] yts <- xts(log10(df_train$count), order.by=df_train$date) ss <- AddLocalLinearTrend(list(), yts) ss <- AddSeasonal(ss, yts, nseasons = 7) ss <- AddSeasonal(ss, yts, nseasons = 12) ss <- AddNamedHolidays(ss, named.holidays = NamedHolidays(), yts) model <- bsts(yts, state.specification = ss, niter = 500, seed=2016) The model looks reasonable: But if I plot the prediction then firstly the trend is completely wrong, and secondly the uncertainty grows VERY quickly - to the point where I can't show the uncertainty band on the same plot as the predictions without making the y axis on a log-scale. The code for this part is here: burn <- SuggestBurn(0.1, model) pred <- predict(model, horizon = predict_length, burn = burn, quantiles = c(.025, .975)) The pure prediction looks like this: And then when scaled back to the initial distribution (with the dotted line showing the transition from training to prediction, the problems are obvious: I have tried adding more seasonal trends, removing seasonal trends, adding an AR term, changing the AddLocalLinearModel to AddGeneralizedLocalLinearTrend and several other things concerning tweaking the model, but nothing has resolved the issues and made the predictions more meaningful. In some cases the direction changes, so rather than dropping to 0 the prediction just continues to increase as a function of time. I definitely don't understand why the model is breaking down in this way. Any suggestions would be very welcome.
Steve Scott here. I wrote the bsts package. I have a few suggestions for you. First, your seasonal components aren't doing what you think they are. I think you have daily data, because you're trying to add a 7 season component, which should be working correctly. But you've told your annual seasonal component to repeat every 12 days. Getting a monthly seasonal component with daily data is kind of hard to do, but you can do a 52 week seasonal by AddSeasonal(..., nseasons = 52, season.duration = 7) . The seasonal.duration argument tells the model how many time points each season should last for. The nseasons argument tells it how many seasons are in a cycle. The total number of time points in a cycle is season.duration * nseasons . The second suggestion is that you might want to think about a different model for trend. The LocalLinearTrend model is very flexible, but this flexibility can show up as undesired variance in long term forecasts. There are some other trend models that contain a bit more structure. GeneralizedLocalLinearTrend (sorry about the nondescriptive name) assumes the "slope" component of trend is an AR1 process instead of a random walk. It is my default option if I want to forecast far into the future. Most of your time series variation seems to come from seasonality, so you might try AddLocalLevel or even AddAr instead of AddLocalLinearTrend . Finally, in general if you're getting strange forecasts, and you want to figure out which part of the model is to blame, try plot(model, "components") to see the model decomposed into the individual pieces you've requested.
{ "source": [ "https://stats.stackexchange.com/questions/209426", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/42494/" ] }
210,403
The answers (definitions) defined on Wikipedia are arguably a bit cryptic to those unfamiliar with higher mathematics/statistics. In mathematical terms, a statistical model is usually thought of as a pair ($S, \mathcal{P}$), where $S$ is the set of possible observations, i.e. the sample space, and $\mathcal{P}$ is a set of probability distributions on $S$. In probability and statistics, a probability distribution assigns a probability to each measurable subset of the possible outcomes of a random experiment, survey, or procedure of statistical inference. Examples are found whose sample space is non-numerical, where the distribution would be a categorical distribution. I am a high school student very interested in this field as a hobby and am currently struggling with the differences between what is a statistical model and a probability distribution My current, and very rudimentary, understanding is this: statistical models are mathematical attempts to approximate measured distributions probability distributions are measured descriptions from experiments that assigns probabilities to each possible outcome of a random event confusion is further compounded by the tendency in literature to see the words "distribution" and "model" used interchangeably - or at least in very similar situations (e.g. binomial distribution vs binomial model) Can someone verify/correct my definitions, and perhaps offer a more formalized (albeit still in terms of simple english) approach to these concepts?
Probability distribution is a mathematical function that describes a random variable. A little bit more precisely, it is a function that assigns probabilities to numbers and it's output has to agree with axioms of probability . Statistical model is an abstract, idealized description of some phenomenon in mathematical terms using probability distributions. Quoting Wasserman (2013): A statistical model $\mathfrak{F}$ is a set of distributions (or densities or regression functions). A parametric model is a set $\mathfrak{F}$ that can be parameterized by a finite number of parameters. [...] In general, a parametric model takes the form $$ \mathfrak{F} = \{ f (x; \theta) : \theta \in \Theta \} $$ where $\theta$ is an unknown parameter (or vector of parameters) that can take values in the parameter space $\Theta$. If $\theta$ is a vector but we are only interested in one component of $\theta$, we call the remaining parameters nuisance parameters . A nonparametric model is a set $\mathfrak{F}$ that cannot be parameterized by a finite number of parameters. In many cases we use distributions as models (you can check this example ). You can use binomial distribution as a model of counts of heads in series of coin throws. In such case we assume that this distribution describes, in simplified way, the actual outcomes. This does not mean that this is an only way on how you can describe such phenomenon, neither that binomial distribution is something that can be used only for this purpose. Model can use one or more distributions, while Bayesian models specify also prior distributions. More formally this is discussed by McCullaugh (2002): According to currently accepted theories [Cox and Hinkley (1974), Chapter 1; Lehmann (1983), Chapter 1; Barndorff-Nielsen and Cox (1994), Section 1.1; Bernardo and Smith (1994), Chapter 4] a statistical model is a set of probability distributions on the sample space $\mathcal{S}$. A parameterized statistical model is a parameter $\Theta$ set together with a function $P : \Theta \rightarrow \mathcal{P} (\mathcal{S})$, which assigns to each parameter point $\mathcal{\theta \in \Theta}$ a probability distribution $P \theta$ on $\mathcal{S}$. Here $\mathcal{P}(\mathcal{S})$ is the set of all probability distributions on $\mathcal{S}$. In much of the following, it is important to distinguish between the model as a function $ P : \Theta \rightarrow \mathcal{P} (\mathcal{S}) $, and the associated set of distributions $P\Theta \subset \mathcal{P} (\mathcal{S})$. So statistical models use probability distributions to describe data in their terms. Parametric models are also described in terms of finite set of parameters. This does not mean that all statistical methods need probability distributions. For example, linear regression is often described in terms of normality assumption , but in fact it is pretty robust to departures from normality and we need assumption about normality of errors for confidence intervals and hypothesis testing. So for regression to work we don't need such assumption, but to have fully specified statistical model we need to describe it in terms of random variables , so we need probability distributions. I write about this because you can often hear people saying that they used regression model for their data -- in most such cases they rather mean that they describe data in terms of linear relation between target values and predictors using some parameters, than insisting on conditional normality. McCullagh, P. (2002). What is a statistical model? 
Annals of statistics, 1225-1267. Wasserman, L. (2013). All of statistics: a concise course in statistical inference. Springer.
{ "source": [ "https://stats.stackexchange.com/questions/210403", "https://stats.stackexchange.com", "https://stats.stackexchange.com/users/100529/" ] }