source_id | question | response | metadata
---|---|---|---|
112,471 | I have two stationary time series. I would like to check for cointegration between them. Does this make sense, and can I just use Engle-Granger Test (two step) for Cointegration for this? | Say you have some data. Say you're willing to assume that the data comes from some distribution -- perhaps Gaussian. There are an infinite number of different Gaussians that the data could have come from (which correspond to the combination of the infinite number of means and variances that a Gaussian distribution can have). MLE will pick the Gaussian (i.e., the mean and variance) that is "most consistent" with your data (the precise meaning of consistent is explained below). So, say you've got a data set of $y = \{-1, 3, 7\}$ . The most consistent Gaussian from which that data could have come has a mean of 3 and a variance of 16. It could have been sampled from some other Gaussian. But one with a mean of 3 and variance of 16 is most consistent with the data in the following sense: the probability of getting the particular $y$ values you observed is greater with this choice of mean and variance, than it is with any other choice. Moving to regression: instead of the mean being a constant, the mean is a linear function of the data, as specified by the regression equation. So, say you've got data like $x = \{ 2,4,10 \}$ along with $y$ from before. The mean of that Gaussian is now the fitted regression model $X'\hat\beta$ , where $\hat\beta =[-1.9,.9]$ Moving to GLMs: replace Gaussian with some other distribution (from the exponential family). The mean is now a linear function of the data, as specified by the regression equation, transformed by the link function. So, it's $g(X'\beta)$ , where $g(x) = e^x/(1+e^x)$ for logit (with binomial data). | {
"source": [
"https://stats.stackexchange.com/questions/112471",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/32175/"
]
} |
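A minimal R sketch of the idea in the answer above, using the toy data $y = \{-1, 3, 7\}$; the grid-free closed forms and the `optim` check below are purely illustrative.

```r
# Sketch: find the Gaussian "most consistent" with y by maximizing the log-likelihood
y <- c(-1, 3, 7)

loglik <- function(mu, sigma2) sum(dnorm(y, mean = mu, sd = sqrt(sigma2), log = TRUE))

# Closed-form maximum-likelihood estimates
mu_hat <- mean(y)               # 3, the sample mean
s2_mle <- mean((y - mu_hat)^2)  # 32/3 ~ 10.67; var(y) = 16 is the n - 1 sample variance

# Numerical check: optim recovers the same values
fit <- optim(c(0, 0), function(p) -loglik(p[1], exp(p[2])))
c(mu = fit$par[1], sigma2 = exp(fit$par[2]))

# Any other choice of (mean, variance) makes the observed y less probable
loglik(mu_hat, s2_mle) > loglik(0, 1)   # TRUE
```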
112,692 | I was hoping someone could propose an argument explaining why the random variables $Y_1=X_2-X_1$ and $Y_2=X_1+X_2$, $X_i$ having the standard normal distribution, are statistically independent. The proof for that fact follows easily from the MGF technique, yet I find it extremely counter-intuitive. I would therefore appreciate the intuition here, if any. Thank you in advance. EDIT: The subscripts do not indicate order statistics but IID observations from the standard normal distribution. | This is standard normally distributed data: Notice that the distribution is circularly symmetric. When you switch to $Y_1 = X_2 - X_1$ and $Y_2 = X_1 + X_2$, you effectively rotate and scale the axes, like this: This new coordinate system has the same origin as the original one, and the axes are orthogonal. Due to the circular symmetry, the variables are still independent in the new coordinate system. | {
"source": [
"https://stats.stackexchange.com/questions/112692",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/31420/"
]
} |
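To complement the geometric picture in the answer above, here is a small simulation sketch in R (sample size and seed are arbitrary):

```r
# Simulate iid standard normals and form the rotated/scaled pair (Y1, Y2)
set.seed(1)
n  <- 1e5
x1 <- rnorm(n)
x2 <- rnorm(n)
y1 <- x2 - x1
y2 <- x1 + x2

cor(y1, y2)        # ~ 0: uncorrelated, and with joint normality that means independent
var(y1); var(y2)   # each ~ 2, reflecting the scaling of the rotated axes
plot(y1, y2, pch = ".", asp = 1)   # still a circularly symmetric cloud, just rescaled
```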
113,230 | I need to generate random numbers following Normal distribution within the interval $(a,b)$. (I am working in R.) I know the function rnorm(n,mean,sd) will generate random numbers following normal distribution,but how to set the interval limits within that? Is there any particular R functions available for that? | It sounds like you want to simulate from a truncated distribution , and in your specific example, a truncated normal . There are a variety of methods for doing so, some simple, some relatively efficient. I'll illustrate some approaches on your normal example. Here's one very simple method for generating one at a time (in some kind of pseudocode): $\tt{repeat}$ generate $x_i$ from N(mean,sd) $\tt{until}$ lower $\leq x_i\leq$ upper If most of the distribution is within the bounds, this is pretty reasonable but it can get quite slow if you nearly always generate outside the limits. In R you could avoid the one-at-a-time loop by computing the area within the bounds and generate enough values that you could be almost certain that after throwing out the values outside the bounds you still had as many values as needed. You could use accept-reject with some suitable majorizing function over the interval (in some cases uniform will be good enough). If the limits were reasonably narrow relative to the s.d. but you weren't far into the tail, a uniform majorizing would work okay with the normal, for example. If you have a reasonably efficient cdf and inverse cdf (such as pnorm and qnorm for the normal distribution in R) you can use the inverse-cdf method described in the first paragraph of the simulating section of the Wikipedia page on the truncated normal . [In effect this is the same as taking a truncated uniform (truncated at the required quantiles, which actually requires no rejections at all, since that's just another uniform) and apply the inverse normal cdf to that. Note that this can fail if you're far into the tail] There are other approaches; the same Wikipedia page mentions adapting the ziggurat method, that should work for a variety of distributions. The same Wikipedia link mentions two specific packages (both on CRAN) with functions for generating truncated normals: The MSM package in R has a function, rtnorm , that calculates draws from a truncated normal. The truncnorm package in R also has functions to draw from a truncated normal. Looking around, a lot of this is covered in answers on other questions (but not exactly duplicates since this question is more general than just the truncated normal) ... see additional discussion in a. This answer b. Xi'an's answer here , which has a link to his arXiv paper (along with some other worthwhile responses). | {
"source": [
"https://stats.stackexchange.com/questions/113230",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/54697/"
]
} |
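For concreteness, here is one way the inverse-cdf approach described above might look in R; the function name and bounds are illustrative only, and (as the answer warns) the method can run into numerical trouble far out in the tail:

```r
# Truncated normal via the inverse-cdf method: draw a uniform on (F(a), F(b)),
# then map it back through qnorm
rtnorm_inv <- function(n, mean = 0, sd = 1, a = -Inf, b = Inf) {
  Fa <- pnorm(a, mean, sd)
  Fb <- pnorm(b, mean, sd)
  u  <- runif(n, min = Fa, max = Fb)   # a truncated uniform needs no rejections
  qnorm(u, mean, sd)
}

x <- rtnorm_inv(10000, mean = 0, sd = 1, a = 1, b = 2)
range(x)                          # all draws lie inside (1, 2)
hist(x, breaks = 40, freq = FALSE)
```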
113,485 | After some searching, I find very little on the incorporation of observation weights/measurement errors into principal components analysis. What I do find tends to rely on iterative approaches to include weightings (e.g., here ). My question is why is this approach necessary? Why can't we use the eigenvectors of the weighted covariance matrix? | It depends on what exactly your weights apply to. Row weights Let $\mathbf{X}$ be the data matrix with variables in columns and $n$ observations $\mathbf x_i$ in rows. If each observation has an associated weight $w_i$ , then it is indeed straightforward to incorporate these weights into PCA. First, one needs to compute the weighted mean $\boldsymbol \mu = \frac{1}{\sum w_i}\sum w_i \mathbf x_i$ and subtract it from the data in order to center it. Then we compute the weighted covariance matrix $\frac{1}{\sum w_i}\mathbf X^\top \mathbf W \mathbf X$ , where $\mathbf W = \operatorname{diag}(w_i)$ is the diagonal matrix of weights, and apply standard PCA to analyze it. Cell weights The paper by Tamuz et al., 2013 , that you found, considers a more complicated case when different weights $w_{ij}$ are applied to each element of the data matrix. Then indeed there is no analytical solution and one has to use an iterative method. Note that, as acknowledged by the authors, they reinvented the wheel, as such general weights have certainly been considered before, e.g. in Gabriel and Zamir, 1979, Lower Rank Approximation of Matrices by Least Squares With Any Choice of Weights . This was also discussed here . As an additional remark: if the weights $w_{ij}$ vary with both variables and observations, but are symmetric, so that $w_{ij}=w_{ji}$ , then analytic solution is possible again, see Koren and Carmel, 2004, Robust Linear Dimensionality Reduction . | {
"source": [
"https://stats.stackexchange.com/questions/113485",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/46012/"
]
} |
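The row-weights recipe above is short enough to spell out in R; the data and weights below are simulated placeholders just to show the steps:

```r
set.seed(42)
X <- matrix(rnorm(200 * 5), nrow = 200)   # 200 observations, 5 variables
w <- runif(200)                           # one non-negative weight per observation

mu <- colSums(w * X) / sum(w)             # weighted mean
Xc <- sweep(X, 2, mu)                     # centre with the weighted mean
S  <- t(Xc) %*% (w * Xc) / sum(w)         # (1 / sum w) * X' W X, with W = diag(w)

e <- eigen(S)                             # standard PCA of the weighted covariance
e$values                                  # weighted principal variances
scores <- Xc %*% e$vectors                # projections onto the weighted PCs
```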
113,602 | I have three groups of data, each with a binomial distribution (i.e. each group has elements that are either success or failure). I do not have a predicted probability of success, but instead can only rely on the success rate of each as an approximation for the true success rate. I have only found this question , which is close but does not seem to exactly deal with the this scenario. To simplify down the test, let's just say that I have 2 groups (3 can be extended from this base case). Group Trials $n_i$ Successes $k_i$ Percentage $p_i$ Group 1 2455 1556 63.4% Group 2 2730 1671 61.2% I don't have an expected success probability, only what I know from the samples. The success rate of each of the sample is fairly close. However my sample sizes are also quite large. If I check the CDF of the binomial distribution to see how different it is from the first (where I'm assuming the first is the null test) I get a very small probability that the second could be achieved. In Excel: 1-BINOM.DIST(1556,2455,61.2%,TRUE) = 0.012 However, this does not take into account any variance of the first result, it just assumes the first result is the test probability. Is there a better way to test if these two samples of data are actually statistically different from one another? | The solution is a simple google away: http://en.wikipedia.org/wiki/Statistical_hypothesis_testing So you would like to test the following null hypothesis against the given alternative $H_0:p_1=p_2$ versus $H_A:p_1\neq p_2$ So you just need to calculate the test statistic which is $$z=\frac{\hat p_1-\hat p_2}{\sqrt{\hat p(1-\hat p)\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}}$$ where $\hat p=\frac{n_1\hat p_1+n_2\hat p_2}{n_1+n_2}$ . So now, in your problem, $\hat p_1=.634$ , $\hat p_2=.612$ , $n_1=2455$ and $n_2=2730.$ Once you calculate the test statistic, you just need to calculate the corresponding critical region value to compare your test statistic too. For example, if you are testing this hypothesis at the 95% confidence level then you need to compare the absolute value of your test statistic against the critical region value of $z_{\alpha/2}=1.96$ (for this two tailed test). Now, if $|z|>z_{\alpha/2}$ then you may reject the null hypothesis, otherwise you must fail to reject the null hypothesis. Well this solution works for the case when you are comparing two groups, but it does not generalize to the case where you want to compare 3 groups. You could however use a Chi Squared test to test if all three groups have equal proportions as suggested by @Eric in his comment above: " Does this question help? stats.stackexchange.com/questions/25299/ … – Eric" | {
"source": [
"https://stats.stackexchange.com/questions/113602",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/27974/"
]
} |
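Plugging the numbers from the table into the pooled two-proportion z statistic above (a quick sketch; `prop.test` gives the equivalent chi-squared form):

```r
k <- c(1556, 1671)                 # successes in groups 1 and 2
n <- c(2455, 2730)                 # trials
p_hat  <- k / n
p_pool <- sum(k) / sum(n)          # pooled estimate

z <- (p_hat[1] - p_hat[2]) /
     sqrt(p_pool * (1 - p_pool) * (1 / n[1] + 1 / n[2]))
z                                  # ~ 1.61 < 1.96, so do not reject H0 at the 5% level
2 * pnorm(-abs(z))                 # two-sided p-value, ~ 0.11

prop.test(k, n, correct = FALSE)   # X-squared equals z^2, same p-value
# With three groups, prop.test on length-3 vectors of successes and trials gives
# the chi-squared test of equal proportions mentioned at the end of the answer.
```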
113,618 | I am facing two problems while using the caret package in R. I am reproducing an example below: library(mlbench)
library(caret)
set.seed(998)
data(Sonar) #Random data, just for illustration purpose
Sonar= Sonar[, 1:6] #Selected first 6 columns only for showing an example. I am assuming V6 to be the response.
head(Sonar)
inTraining <- createDataPartition(Sonar$V6, p = 0.75, list = FALSE)
training <- Sonar[inTraining, ]
testing <- Sonar[-inTraining, ]
modelFit <- train( V6~.,data=training, method="rpart" )
varImp(modelFit) a. How can I extract the top three (3) variables from the varImp output? I tried to order the variables, but for some reason it's not working for me. b. Also, why doesn't the following code work for "randomForest"? modelFit <- train( V6~.,data=training, method="rf" )
varImp(modelFit)
> varImp(modelFit)
Rerun with Debug
Error in varImp[, "%IncMSE"] : subscript out of bounds | What is the issue with #1? It runs fine for me and the result of the call to varImp() produces the following, ordered most to least important: > varImp(modelFit)
rpart variable importance
Overall
V5 100.000
V4 38.390
V3 38.362
V2 5.581
V1 0.000 EDIT Based on Question clarification: I am sure there are better ways, but here is how I might do it: ImpMeasure<-data.frame(varImp(modelFit)$importance)
ImpMeasure$Vars<-row.names(ImpMeasure)
ImpMeasure[order(-ImpMeasure$Overall),][1:3,] Regarding #2, you need to add importance=TRUE in order to tell randomForest to calculate them. > modelFit <- train( V6~.,data=training, method="rf" ,importance = TRUE)
> varImp(modelFit)
rf variable importance
Overall
V5 100.000
V3 22.746
V2 21.136
V4 3.797
V1 0.000 | {
"source": [
"https://stats.stackexchange.com/questions/113618",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/31017/"
]
} |
114,027 | Various hypothesis tests, such as the $\chi^{2}$ GOF test, Kolmogorov-Smirnov, Anderson-Darling, etc., follow this basic format: $H_0$: The data follow the given distribution. $H_1$: The data do not follow the given distribution. Typically, one assesses the claim that some given data follows some given distribution, and if one rejects $H_0$, the data is not a good fit for the given distribution at some $\alpha$ level. But what if we don't reject $H_0$? I've always been taught that one cannot "accept" $H_0$, so basically, we do not evidence to reject $H_0$. That is, there is no evidence that we reject that the data follow the given distribution. Thus, my question is, what is the point of performing such testing if we can't conclude whether or not the data follow a given distribution? | Broadly speaking (not just in goodness of fit testing, but in many other situations), you simply can't conclude that the null is true, because there are alternatives that are effectively indistinguishable from the null at any given sample size. Here's two distributions, a standard normal (green solid line), and a similar-looking one (90% standard normal, and 10% standardized beta(2,2), marked with a red dashed line): The red one is not normal. At say $n=100$, we have little chance of spotting the difference, so we can't assert that data are drawn from a normal distribution -- what if it were from a non-normal distribution like the red one instead? Smaller fractions of standardized betas with equal but larger parameters would be much harder to see as different from a normal. But given that real data are almost never from some simple distribution, if we had a perfect oracle (or effectively infinite sample sizes), we would essentially always reject the hypothesis that the data were from some simple distributional form. As George Box famously put it , " All models are wrong, but some are useful. " Consider, for example, testing normality. It may be that the data actually come from something close to a normal, but will they ever be exactly normal? They probably never are. Instead, the best you can hope for with that form of testing is the situation you describe. (See, for example, the post Is normality testing essentially useless? , but there are a number of other posts here that make related points) This is part of the reason I often suggest to people that the question they're actually interested in (which is often something nearer to 'are my data close enough to distribution $F$ that I can make suitable inferences on that basis?') is usually not well-answered by goodness-of-fit testing. In the case of normality, often the inferential procedures they wish to apply (t-tests, regression etc) tend to work quite well in large samples - often even when the original distribution is fairly clearly non-normal -- just when a goodness of fit test will be very likely to reject normality . It's little use having a procedure that is most likely to tell you that your data are non-normal just when the question doesn't matter. Consider the image above again. The red distribution is non-normal, and with a really large sample we could reject a test of normality based on a sample from it ... but at a much smaller sample size, regressions and two sample t-tests (and many other tests besides) will behave so nicely as to make it pointless to even worry about that non-normality even a little. 
Similar considerations extend not only to other distributions, but largely, to a large amount of hypothesis testing more generally (even a two-tailed test of $\mu=\mu_0$ for example). One might as well ask the same kind of question - what is the point of performing such testing if we can't conclude whether or not the mean takes a particular value? You might be able to specify some particular forms of deviation and look at something like equivalence testing, but it's kind of tricky with goodness of fit because there are so many ways for a distribution to be close to but different from a hypothesized one, and different forms of difference can have different impacts on the analysis. If the alternative is a broader family that includes the null as a special case, equivalence testing makes more sense (testing exponential against gamma, for example) -- and indeed, the "two one-sided test" approach carries through, and that might be a way to formalize "close enough" (or it would be if the gamma model were true, but in fact would itself be virtually certain to be rejected by an ordinary goodness of fit test, if only the sample size were sufficiently large ). Goodness of fit testing (and often more broadly, hypothesis testing) is really only suitable for a fairly limited range of situations. The question people usually want to answer is not so precise, but somewhat more vague and harder to answer -- but as John Tukey said, " Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise. " Reasonable approaches to answering the more vague question may include simulation and resampling investigations to assess the sensitivity of the desired analysis to the assumption you are considering, compared to other situations that are also reasonably consistent with the available data. (It's also part of the basis for the approach to robustness via $\varepsilon$-contamination -- essentially by looking at the impact of being within a certain distance in the Kolmogorov-Smirnov sense) | {
"source": [
"https://stats.stackexchange.com/questions/114027",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/46427/"
]
} |
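The point about power can be made concrete with a short simulation, sketched below using the 90% normal / 10% standardized beta(2,2) mixture described in the answer (seed, sample size, and number of replications are arbitrary):

```r
# Data from the mixture are not normal, yet a normality test at n = 100
# rejects only a little more often than the nominal 5%
set.seed(123)
rmix <- function(n) {
  x   <- rnorm(n)
  idx <- runif(n) < 0.10                                 # 10% contamination
  x[idx] <- (rbeta(sum(idx), 2, 2) - 0.5) / sqrt(0.05)   # standardized beta(2,2)
  x
}
pvals <- replicate(2000, shapiro.test(rmix(100))$p.value)
mean(pvals < 0.05)   # stays near 0.05: little power against this alternative
```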
114,152 | In this current article in SCIENCE the following is being proposed: Suppose you randomly divide 500 million in income among 10,000
people. There's only one way to give everyone an equal, 50,000 share.
So if you're doling out earnings randomly, equality is extremely
unlikely. But there are countless ways to give a few people a lot of
cash and many people a little or nothing. In fact, given all the ways
you could divvy out income, most of them produce an exponential
distribution of income. I have done this with the following R code which seems to reaffirm the result: library(MASS)
w <- 500000000 #wealth
p <- 10000 #people
d <- diff(c(0,sort(runif(p-1,max=w)),w)) #wealth-distribution
h <- hist(d, col="red", main="Exponential decline", freq = FALSE, breaks = 45, xlim = c(0, quantile(d, 0.99)))
fit <- fitdistr(d,"exponential")
curve(dexp(x, rate = fit$estimate), col = "black", type="p", pch=16, add = TRUE) My question How can I analytically prove that the resulting distribution is indeed exponential? Addendum Thank you for your answers and comments. I have thought about the problem and came up with the following intuitive reasoning. Basically the following happens (Beware: oversimplification ahead): You kind of go along the amount and toss a (biased) coin. Every time you get e.g. heads you divide the amount. You distribute the resulting partitions. In the discrete case the coin tossing follows a binomial distribution and the partitions are geometrically distributed. The continuous analogues are the Poisson distribution and the exponential distribution respectively! (By the same reasoning it also becomes intuitively clear why the geometric and the exponential distributions have the property of memorylessness - because the coin doesn't have a memory either).
\begin{equation}
C_{1} (\{n_{s}\})\equiv\sum_{s} n_{s} - N = 0,
\end{equation}
and
\begin{equation}
C_{2} (\{n_{s}\})\equiv \sum_{s} n_{s} x_{s} - X = 0.
\end{equation} Notice that many different ways to divide the share can represent the same distribution. For example, if we considered dividing \$4 between two people, giving \$3 to Alice and \$1 to Bob and vice versa would both give identical distributions. As the division is random, the distribution with the maximum number of corresponding ways to divide the share has the best chance to occur. To obtain such a distribution, one has to maximize
\begin{equation}
W(\{n_{s}\}) \equiv \frac{N!}{\prod_{s} n_{s}!},
\end{equation}
under the two constraints given above. The method of Lagrange multipliers is a canonical approach for this. Furthermore, one can choose to work with $\ln W$ instead of $W$ itself, as "$\ln$" is a monotone increasing function. That is,
\begin{equation}
\frac{\partial \ln W}{\partial n_{s}} = \lambda_{1} \frac{\partial C_{1}}{\partial n_{s}} +
\lambda_{2} \frac{\partial C_{2}}{\partial n_{s}} = \lambda_{1} + \lambda_{2} x_{s},
\end{equation}
where $\lambda_{1,2}$ are Lagrange multipliers. Notice that according to Stirling's formula ,
\begin{equation}
\ln n! \approx n\ln n - n,
\end{equation}
leading to
\begin{equation}
\frac{d\ln n!}{dn} \approx \ln n.
\end{equation}
Thus,
\begin{equation}
\frac{\partial \ln W}{\partial n_{s}} \approx -\ln n_{s}.
\end{equation}
It then follows that
\begin{equation}
n_{s} \approx \exp\big(-\lambda_{1} - \lambda_{2} x_{s}\big),
\end{equation}
which is an exponential distribution. One can obtain the values of Lagrange multipliers using the constraints. From the first constraint,
\begin{equation}
\begin{split}
N &= \sum_{s} n_{s} \approx \sum_{s} \exp\big(-\lambda_{1} - \lambda_{2} x_{s}\big)\\
&\approx \frac{1}{\Delta x} \int_{0}^{\infty} \exp\big(-\lambda_{1} - \lambda_{2} x\big) \,\,dx\\
&=\frac{1}{\lambda_{2}\Delta x}\exp\big(-\lambda_{1}\big),
\end{split}
\end{equation}
where $\Delta x$ is the spacing between allowed values. Similarly,
\begin{equation}
\begin{split}
X &= \sum_{s} n_{s}x_{s} \approx \sum_{s} x_{s}\,\exp\big(-\lambda_{1} - \lambda_{2} x_{s}\big)\\
&\approx \frac{1}{\Delta x} \int_{0}^{\infty} x\,\exp\big(-\lambda_{1} - \lambda_{2} x\big) \,\,dx\\
&=\frac{1}{\lambda_{2}^{2}\,\Delta x}\exp\big(-\lambda_{1}\big).
\end{split}
\end{equation}
Therefore, we have
\begin{equation}
\exp\big(-\lambda_{1}\big) = \frac{N^{2} \Delta x}{X},
\end{equation}
and
\begin{equation}
\lambda_{2} = \frac{N}{X}.
\end{equation}
That this is really a maximum, rather than a minimum or a saddle point, can be seen from the Hessian of $\ln W - \lambda_{1} C_{1} - \lambda_{2} C_{2}$. Because $C_{1,2}$ are linear in $n_{s}$, it is the same as that of $\ln W$:
\begin{equation}
\frac{\partial^{2} \ln W}{\partial n_{s}^{2}} = -\frac{1}{n_{s}} < 0,
\end{equation}
and
\begin{equation}
\frac{\partial^{2} \ln W}{\partial n_{s}\partial n_{r}} = 0 \quad (s\neq r).
\end{equation}
Hence the Hessian is negative definite, and what we have found is indeed a maximum. The function $W(\{n_{s}\})$ is really a distribution over distributions. For the distributions we typically observe to lie close to the most probable one, $W(\{n_{s}\})$ should be narrow enough. It is seen from the Hessian that this condition amounts to $n_{s}\gg 1$. (It is also the condition under which Stirling's formula is reliable.) Therefore, to actually see the exponential distribution, partitions of the income axis (corresponding to bins in the OP's histogram) should be wide enough that the number of people in a partition is much greater than unity. Towards the tail, where $n_{s}$ tends to zero, this condition is always destined to fail. Note: this is exactly how physicists understand the Boltzmann distribution in statistical mechanics. The exponential distribution is essentially exact in that setting, since $N\sim 10^{23}$. | {
"source": [
"https://stats.stackexchange.com/questions/114152",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/230/"
]
} |
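As a numerical cross-check of the result $\lambda_{2} = N/X$ above: with $N = 10000$ and $X = 5\times 10^{8}$ from the question, the rate should be $2\times 10^{-5}$, i.e. a mean share of $X/N = 50000$. A sketch reusing the simulation scheme from the question (seed added here for reproducibility):

```r
library(MASS)
set.seed(1)
w <- 500000000                                     # X, total income
p <- 10000                                         # N, number of people
d <- diff(c(0, sort(runif(p - 1, max = w)), w))    # same partition scheme as the question
fitdistr(d, "exponential")$estimate                # fitted rate ~ 2e-05
p / w                                              # N / X = 2e-05, as derived above
```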
114,385 | Recently I have been reading about deep learning and I am confused about the terms (or say technologies). What is the difference between Convolutional neural networks (CNN), Restricted Boltzmann machines (RBM) and Auto-encoders? | Autoencoder is a simple 3-layer neural network where output units are directly connected back to input units . E.g. in a network like this: output[i] has edge back to input[i] for every i . Typically, number of hidden units is much less then number of visible (input/output) ones. As a result, when you pass data through such a network, it first compresses (encodes) input vector to "fit" in a smaller representation, and then tries to reconstruct (decode) it back. The task of training is to minimize an error or reconstruction, i.e. find the most efficient compact representation (encoding) for input data. RBM shares similar idea, but uses stochastic approach. Instead of deterministic (e.g. logistic or ReLU) it uses stochastic units with particular (usually binary of Gaussian) distribution. Learning procedure consists of several steps of Gibbs sampling (propagate: sample hiddens given visibles; reconstruct: sample visibles given hiddens; repeat) and adjusting the weights to minimize reconstruction error. Intuition behind RBMs is that there are some visible random variables (e.g. film reviews from different users) and some hidden variables (like film genres or other internal features), and the task of training is to find out how these two sets of variables are actually connected to each other (more on this example may be found here ). Convolutional Neural Networks are somewhat similar to these two, but instead of learning single global weight matrix between two layers, they aim to find a set of locally connected neurons. CNNs are mostly used in image recognition. Their name comes from "convolution" operator or simply "filter". In short, filters are an easy way to perform complex operation by means of simple change of a convolution kernel. Apply Gaussian blur kernel and you'll get it smoothed. Apply Canny kernel and you'll see all edges. Apply Gabor kernel to get gradient features. (image from here ) The goal of convolutional neural networks is not to use one of predefined kernels, but instead to learn data-specific kernels . The idea is the same as with autoencoders or RBMs - translate many low-level features (e.g. user reviews or image pixels) to the compressed high-level representation (e.g. film genres or edges) - but now weights are learned only from neurons that are spatially close to each other. All three models have their use cases, pros and cons, but probably the most important properties are: Autoencoders are simplest ones. They are intuitively understandable, easy to implement and to reason about (e.g. it's much easier to find good meta-parameters for them than for RBMs). RBMs are generative. That is, unlike autoencoders that only discriminate some data vectors in favour of others, RBMs can also generate new data with given joined distribution. They are also considered more feature-rich and flexible. CNNs are very specific model that is mostly used for very specific task (though pretty popular task). Most of the top-level algorithms in image recognition are somehow based on CNNs today, but outside that niche they are hardly applicable (e.g. what's the reason to use convolution for film review analysis?). UPD. Dimensionality reduction When we represent some object as a vector of $n$ elements, we say that this is a vector in $n$-dimensional space. 
Thus, dimensionality reduction refers to a process of refining data in such a way, that each data vector $x$ is translated into another vector $x'$ in an $m$-dimensional space (vector with $m$ elements), where $m < n$. Probably the most common way of doing this is PCA . Roughly speaking, PCA finds "internal axes" of a dataset (called "components") and sorts them by their importance. First $m$ most important components are then used as new basis. Each of these components may be thought of as a high-level feature, describing data vectors better than original axes. Both - autoencoders and RBMs - do the same thing. Taking a vector in $n$-dimensional space they translate it into an $m$-dimensional one, trying to keep as much important information as possible and, at the same time, remove noise. If training of autoencoder/RBM was successful, each element of resulting vector (i.e. each hidden unit) represents something important about the object - shape of an eyebrow in an image, genre of a film, field of study in scientific article, etc. You take lots of noisy data as an input and produce much less data in a much more efficient representation. Deep architectures So, if we already had PCA, why the hell did we come up with autoencoders and RBMs? It turns out that PCA only allows linear transformation of a data vectors. That is, having $m$ principal components $c_1..c_m$, you can represent only vectors $x=\sum_{i=1}^{m}w_ic_i$. This is pretty good already, but not always enough. No matter, how many times you will apply PCA to a data - relationship will always stay linear. Autoencoders and RBMs, on other hand, are non-linear by the nature, and thus, they can learn more complicated relations between visible and hidden units. Moreover, they can be stacked , which makes them even more powerful. E.g. you train RBM with $n$ visible and $m$ hidden units, then you put another RBM with $m$ visible and $k$ hidden units on top of the first one and train it too, etc. And exactly the same way with autoencoders. But you don't just add new layers. On each layer you try to learn best possible representation for a data from the previous one: On the image above there's an example of such a deep network. We start with ordinary pixels, proceed with simple filters, then with face elements and finally end up with entire faces! This is the essence of deep learning . Now note, that at this example we worked with image data and sequentially took
larger and larger areas of spatially close pixels. Doesn't it sound similar? Yes, because it's an example of deep convolutional network. Be it based on autoencoders or RBMs, it uses convolution to stress importance of locality. That's why CNNs are somewhat distinct from autoencoders and RBMs. Classification None of models mentioned here work as classification algorithms per se. Instead, they are used for pretraining - learning transformations from low-level and hard-to-consume representation (like pixels) to a high-level one. Once deep (or maybe not that deep) network is pretrained, input vectors are transformed to a better representation and resulting vectors are finally passed to real classifier (such as SVM or logistic regression). In an image above it means that at the very bottom there's one more component that actually does classification. | {
"source": [
"https://stats.stackexchange.com/questions/114385",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/41749/"
]
} |
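The linear dimensionality-reduction baseline that the answer contrasts with autoencoders and RBMs can be written out in a few lines of R; this is an illustrative sketch using the built-in iris measurements, keeping $m=2$ of the 4 components:

```r
X  <- as.matrix(iris[, 1:4])                 # 4-dimensional data
pc <- prcomp(X)                              # PCA (centred by default)

m    <- 2                                    # keep m < n components
Z    <- pc$x[, 1:m]                          # compressed m-dimensional representation
Xhat <- sweep(Z %*% t(pc$rotation[, 1:m]), 2, pc$center, "+")   # linear reconstruction

mean((X - Xhat)^2)                           # small reconstruction error
summary(pc)$importance[3, m]                 # cumulative variance explained by m PCs
```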
114,397 | The histogram of my dependent variable is as following: I draw the scatter plot of my dependent variable and independent variable, and the result is as following picture? I am wondering if the skewness of the dependent variable may affect the result. It seems that the relationship is not linear and we have a lot of outliers! | | {
"source": [
"https://stats.stackexchange.com/questions/114397",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/24587/"
]
} |
114,610 | What is the relationship between $Y$ and $X$ in the following plot?
In my view there is a negative linear relationship, but because we have a lot of outliers, the relationship is very weak. Am I right?
I want to learn how can we explain scatterplots. | The question deals with several concepts: how to evaluate data given only in the form of a scatterplot, how to summarize a scatterplot, and whether (and to what degree) a relationship looks linear. Let's take them in order. Evaluating graphical data Use principles of exploratory data analysis (EDA). These (at least originally, when they were developed for pencil-and-paper use) emphasize simple, easy-to-compute, robust summaries of data. One of the very simplest kinds of summaries is based on positions within a set of numbers, such as the middle value, which describes a "typical" value. Middles are easy to estimate reliably from graphics. Scatterplots exhibit pairs of numbers. The first of each pair (as plotted on the horizontal axis) gives a set of single numbers, which we could summarize separately. In this particular scatterplot, the y-values appear to lie within two almost completely separate groups : the values above $60$ at the top and those equal to or less than $60$ at the bottom. (This impression is confirmed by drawing a histogram of the y-values, which is sharply bimodal, but that would be a lot of work at this stage.) I invite sceptics to squint at the scatterplot. When I do--using a large-radius, gamma-corrected Gaussian blur (that is, a standard rapid image processing result) of the dots in the scatterplot I see this: The two groups--upper and lower--are pretty apparent. (The upper group is much lighter than the lower because it contains many fewer dots.) Accordingly, let's summarize the groups of y-values separately. I will do that by drawing horizontal lines at the medians of the two groups. In order to emphasize the impression of the data and to show we're not doing any kind of computation, I have (a) removed all decorations like axes and gridlines and (b) blurred the points. Little information about the patterns in the data is lost by thus "squinting" at the graphic: Similarly, I have attempted to mark the medians of the x-values with vertical line segments. In the upper group (red lines) you can check--by counting the blobs--that these lines do actually separate the group into two equal halves, both horizontally and vertically. In the lower group (blue lines) I have only visually estimated the positions without actually doing any counting. Assessing Relationships: Regression The points of intersection are the centers of the two groups. One excellent summary of the relationship among the x and y values would be to report these central positions. One would then want to supplement this summary by a description of how much the data are spread in each group--to the left and right, above and below--around their centers. For brevity, I won't do that here, but note that (roughly) the lengths of the line segments I have drawn reflect the overall spreads of each group. Finally, I drew a (dashed) line connecting the two centers. This is a reasonable regression line. Is it a good description of the data? Certainly not: look how spread out the data are around this line. Is it even evidence of linearity? That's scarcely relevant because the linear description is so poor. Nevertheless, because that is the question before us, let's address it. Evaluating Linearity A relationship is linear in a statistical sense when either the y values vary in a balanced random fashion around a line or the x values are seen to vary in a balanced random fashion around a line (or both). 
The former does not appear to be the case here: because the y values seem to fall into two groups, their variation is never going to look balanced in the sense of being roughly symmetrically distributed above or below the line. (That immediately rules out the possibility of dumping the data into a linear regression package and performing a least squares fit of y against x: the answers would not be relevant.) What about variation in x? That is more plausible: at each height on the plot, the horizontal scatter of points around the dotted line is pretty balanced. The spread in this scatter seems to be a little bit greater at lower heights (low y values), but maybe that's because there are many more points there. (The more random data you have, the wider apart their extreme values will tend to be.) Moreover, as we scan from top to bottom, there are no places where the horizontal scatter around the regression line is strongly unbalanced: that would be evidence of non-linearity. (Well, maybe around y=50 or so there may be too many large x values. This subtle effect could be taken as further evidence for breaking the data into two groups around the y=60 value.) Conclusions We have seen that It makes sense to view x as a linear function of y plus some "nice" random variation. It does not make sense to view y as a linear function of x plus random variation. A regression line can be estimated by separating the data into a group of high y values and a group of low y values, finding the centers of both groups by using medians, and connecting those centers. The resulting line has a downward slope, indicating a negative linear relationship. There are no strong departures from linearity. Nevertheless, because the spreads of the x-values around the line are still large (compared to the overall spread of the x-values to begin with), we would have to characterize this negative linear relationship as "very weak." It might be more useful to describe the data as forming two oval-shaped clouds (one for y above 60 and another for lower values of y). Within each cloud there is little detectable relationship between x and y. The centers of the clouds are near (0.29, 90) and (0.38, 30). The clouds have comparable spreads, but the upper cloud has far fewer data than the lower one (maybe 20% as much). Two of these conclusions confirm those made in the question itself that there is a weak negative relationship. The others supplement and support those conclusions. One conclusion drawn in the question that does not seem to hold up is the assertion that there are "outliers." A more careful examination (as sketched below) will fail to turn up any individual points, or even small groups of points, that validly could be considered outlying. After sufficiently long analysis, one's attention might be drawn to the two points near the middle right or the one point at the lower left corner, but even these are not going to change one's assessment of the data very much, whether or not they are considered outlying. Further Directions Much more could be said. The next steps would be to assess the spreads of those clouds. The relationships between x and y within each of the two clouds could be evaluated separately, using the same techniques shown here. The slight asymmetry of the lower cloud (more data seem to appear at the smallest y values) could be evaluated and even adjusted by re-expressing the y values (a square root might work well). 
At this stage it would make sense to look for outlying data, because at this point the description would include information about typical data values as well as their spreads; outliers (by definition) would be too far from the middle to be explained in terms of the observed amount of spreading. None of this work--which is quite quantitative--requires much more than finding middles of groups of data and doing some simple computations with them, and therefore can be done quickly and accurately even when the data are available only in graphical form. Every result reported here--including the quantitative values--could easily be found within a few seconds using a display system (such as hardcopy and a pencil :-)) which permits one to make light marks on top of the graphic. | {
"source": [
"https://stats.stackexchange.com/questions/114610",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/24587/"
]
} |
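A sketch of the group-median construction described above, written out in R. The plot data are not available, so the two clouds below are simulated stand-ins centred at the reported (0.29, 90) and (0.38, 30); the spreads, group sizes, and seed are made up for illustration:

```r
set.seed(10)
lower <- data.frame(x = rnorm(160, 0.38, 0.10), y = rnorm(160, 30, 12))
upper <- data.frame(x = rnorm(40,  0.29, 0.10), y = rnorm(40,  90, 12))
d   <- rbind(lower, upper)
grp <- ifelse(d$y > 60, "upper", "lower")          # split at y = 60, as in the answer

centres <- aggregate(d, by = list(group = grp), FUN = median)
centres                                            # per-group medians of x and y

plot(d$x, d$y, col = ifelse(grp == "upper", "red", "blue"), pch = 16)
lines(centres$x, centres$y, lty = 2)               # line joining the two centres
diff(centres$y) / diff(centres$x)                  # negative slope: a weak negative trend
```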
114,630 | Take these two vectors: points <- c(44, 36, 33, 33, 29, 28, 28, 22, 21, 20, 18, 15, 15, 15, 12, 12, 12, 11, 10, 10, 8, 8, 8, 8, 7, 7, 7, 6, 6, 6, 6, 6, 5, 5, 4, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1)
hours <- c(137.000000, 58.450000, 92.250000, 94.750000, 80.000000, 100.000000, 33.750000, 48.500000, 70.500000, 76.500000, 43.250000, 90.750000, 33.000000, 13.750000, 16.250000, 27.250000, 30.830000, 12.000000, 68.000000, 34.000000, 13.250000, 32.250000, 9.750000, 4.083333, 18.000000, 23.750000, 4.666667, 31.750000, 11.750000, 4.850000, 10.866667, 4.166667, 14.000000, 6.166667, 2.000000, 7.750000, 11.100000, 4.750000, 1.750000, 0.250000, 1.000000, 12.000000, 13.000000, 1.000000, 0.250000, 5.250000, 5.333333, 2.166667) I made this model: lm(points ~ hours)
# Call:
# lm(formula = points ~ hours)
# Coefficients:
# (Intercept) hours
# 2.9530 0.2823 The model suggests that if I do 1 hour, I get 3.2353 points. I have also calculated mean points per hour: mean(points/hours)
# 0.839875 The ratio suggests that if I do 1 hour, I get 0.839875 points. Why is there such a big difference in the number of points per hour predicted by each method? Or am I interpreting something wrong? | | {
"source": [
"https://stats.stackexchange.com/questions/114630",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/12492/"
]
} |
114,632 | I'm reading a paper where the statistic reported for testing the general significance of a multiple linear regression is a "Wald Chi Square" instead of the usual "F" statistic. Is it the same thing to use the Chi Square statistic or the F statistic, or in what occasions are the two used? | | {
"source": [
"https://stats.stackexchange.com/questions/114632",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/32093/"
]
} |
114,763 | I use mostly "Gaussian distribution" in my book, but someone just suggested I switch to "normal distribution". Any consensus on which term to use for beginners? Of course the two terms are synonyms , so this is not a question about substance, but purely a matter of which term is more commonly used. And of course I use both terms. But which should be used mostly? | Even though I tend to say 'normal' more often (since that's what I was taught when first learning), I think "Gaussian" is a better choice, as long as students/readers are quite familiar with both terms: The normal isn't particularly typical, so the name is itself misleading. It certainly plays an important role (not least because of the CLT), but observed data is much less often particularly near Gaussian than is sometimes suggested. The word (and associated words like "normalize") has several meanings that can be relevant in statistics (consider "orthonormal basis" for example). If someone says "I normalized my sample" I can't tell for sure if they transformed to normality, computed z-scores, scaled the vector to unit length, to length $\sqrt{n}$, or a number of other possibilities. If we tended to call the distribution "Gaussian" at least the first option is eliminated and something more descriptive replaces it. Gauss at least has a reasonable degree of claim to the distribution. | {
"source": [
"https://stats.stackexchange.com/questions/114763",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/25/"
]
} |
115,011 | According to a text that I'm using, the formula for the variance of the $i^{th}$ residual is given by: $\sigma^2\left ( 1-\frac{1}{n}-\frac{(x_{i}-\overline{x})^2}{S_{xx}} \right )$ I find this hard to believe since the $i^{th}$ residual is the difference between the $i^{th}$ observed value and the $i^{th}$ fitted value; if one were to compute the variance of the difference, at the very least I would expect some "pluses" in the resulting expression. Any help in understanding the derivation would be appreciated. | The intuition about the "plus" signs related to the variance (from the fact that even when we calculate the variance of a difference of independent random variables, we add their variances) is correct but fatally incomplete: if the random variables involved are not independent, then covariances are also involved -and covariances may be negative. There exists an expression that is almost like the expression in the question was thought that it "should" be by the OP (and me), and it is the variance of the prediction error , denote it $e^0 = y^0 - \hat y^0$, where $y^0 = \beta_0+\beta_1x^0+u^0$: $$\text{Var}(e^0) = \sigma^2\cdot \left(1 + \frac 1n + \frac {(x^0-\bar x)^2}{S_{xx}}\right)$$ The critical difference between the variance of the prediction error and the variance of the estimation error (i.e. of the residual), is that the error term of the predicted observation is not correlated with the estimator , since the value $y^0$ was not used in constructing the estimator and calculating the estimates, being an out-of-sample value. The algebra for both proceeds in exactly the same way up to a point (using $^0$ instead of $_i$), but then diverges. Specifically: In the simple linear regression $y_i = \beta_0 + \beta_1x_i + u_i$, $\text{Var}(u_i)=\sigma^2$, the variance of the estimator $\hat \beta = (\hat \beta_0, \hat \beta_1)'$ is still $$\text{Var}(\hat \beta) = \sigma^2 \left(\mathbf X' \mathbf X\right)^{-1}$$ We have $$\mathbf X' \mathbf X= \left[ \begin{matrix}
n & \sum x_i\\
\sum x_i & \sum x_i^2 \end{matrix}\right]$$ and so $$\left(\mathbf X' \mathbf X\right)^{-1}= \left[ \begin{matrix}
\sum x_i^2 & -\sum x_i\\
-\sum x_i & n \end{matrix}\right]\cdot \left[n\sum x_i^2-\left(\sum x_i\right)^2\right]^{-1}$$ We have $$\left[n\sum x_i^2-\left(\sum x_i\right)^2\right] = \left[n\sum x_i^2-n^2\bar x^2\right] = n\left[\sum x_i^2-n\bar x^2\right] \\= n\sum (x_i^2-\bar x^2) \equiv nS_{xx}$$ So $$\left(\mathbf X' \mathbf X\right)^{-1}= \left[ \begin{matrix}
(1/n)\sum x_i^2 & -\bar x\\
-\bar x & 1 \end{matrix}\right]\cdot (1/S_{xx})$$ which means that $$\text{Var}(\hat \beta_0) = \sigma^2\left(\frac 1n\sum x_i^2\right)\cdot \ (1/S_{xx}) = \frac {\sigma^2}{n}\frac{S_{xx}+n\bar x^2} {S_{xx}} = \sigma^2\left(\frac 1n + \frac{\bar x^2} {S_{xx}}\right) $$ $$\text{Var}(\hat \beta_1) = \sigma^2(1/S_{xx}) $$ $$\text{Cov}(\hat \beta_0,\hat \beta_1) = -\sigma^2(\bar x/S_{xx}) $$ The $i$-th residual is defined as $$\hat u_i = y_i - \hat y_i = (\beta_0 - \hat \beta_0) + (\beta_1 - \hat \beta_1)x_i +u_i$$ The actual coefficients are treated as constants, the regressor is fixed (or conditional on it), and has zero covariance with the error term, but the estimators are correlated with the error term, because the estimators contain the dependent variable, and the dependent variable contains the error term. So we have $$\text{Var}(\hat u_i) = \Big[\text{Var}(u_i)+\text{Var}(\hat \beta_0)+x_i^2\text{Var}(\hat \beta_1)+2x_i\text{Cov}(\hat \beta_0,\hat \beta_1)\Big] + 2\text{Cov}([(\beta_0 - \hat \beta_0) + (\beta_1 - \hat \beta_1)x_i],u_i) $$ $$=\Big[\sigma^2 + \sigma^2\left(\frac 1n + \frac{\bar x^2} {S_{xx}}\right) + x_i^2\sigma^2(1/S_{xx}) +2\text{Cov}([(\beta_0 - \hat \beta_0) + (\beta_1 - \hat \beta_1)x_i],u_i)$$ Pack it up a bit to obtain $$\text{Var}(\hat u_i)=\left[\sigma^2\cdot \left(1 + \frac 1n + \frac {(x_i-\bar x)^2}{S_{xx}}\right)\right]+ 2\text{Cov}([(\beta_0 - \hat \beta_0) + (\beta_1 - \hat \beta_1)x_i],u_i)$$ The term in the big parenthesis has exactly the same structure with the variance of the prediction error, with the only change being that instead of $x_i$ we will have $x^0$ (and the variance will be that of $e^0$ and not of $\hat u_i$). The last covariance term is zero for the prediction error because $y^0$ and hence $u^0$ is not included in the estimators, but not zero for the estimation error because $y_i$ and hence $u_i$ is part of the sample and so it is included in the estimator. We have $$2\text{Cov}([(\beta_0 - \hat \beta_0) + (\beta_1 - \hat \beta_1)x_i],u_i) = 2E\left([(\beta_0 - \hat \beta_0) + (\beta_1 - \hat \beta_1)x_i]u_i\right)$$ $$=-2E\left(\hat \beta_0u_i\right)-2x_iE\left(\hat \beta_1u_i\right) = -2E\left([\bar y -\hat \beta_1 \bar x]u_i\right)-2x_iE\left(\hat \beta_1u_i\right)$$ the last substitution from how $\hat \beta_0$ is calculated. Continuing, $$...=-2E(\bar yu_i) -2(x_i-\bar x)E\left(\hat \beta_1u_i\right) = -2\frac {\sigma^2}{n} -2(x_i-\bar x)E\left[\frac {\sum(x_i-\bar x)(y_i-\bar y)}{S_{xx}}u_i\right]$$ $$=-2\frac {\sigma^2}{n} -2\frac {(x_i-\bar x)}{S_{xx}}\left[ \sum(x_i-\bar x)E(y_iu_i-\bar yu_i)\right]$$ $$=-2\frac {\sigma^2}{n} -2\frac {(x_i-\bar x)}{S_{xx}}\left[ -\frac {\sigma^2}{n}\sum_{j\neq i}(x_j-\bar x) + (x_i-\bar x)\sigma^2(1-\frac 1n)\right]$$ $$=-2\frac {\sigma^2}{n}-2\frac {(x_i-\bar x)}{S_{xx}}\left[ -\frac {\sigma^2}{n}\sum(x_i-\bar x) + (x_i-\bar x)\sigma^2\right]$$ $$=-2\frac {\sigma^2}{n}-2\frac {(x_i-\bar x)}{S_{xx}}\left[ 0 + (x_i-\bar x)\sigma^2\right] = -2\frac {\sigma^2}{n}-2\sigma^2\frac {(x_i-\bar x)^2}{S_{xx}}$$ Inserting this into the expression for the variance of the residual, we obtain $$\text{Var}(\hat u_i)=\sigma^2\cdot \left(1 - \frac 1n - \frac {(x_i-\bar x)^2}{S_{xx}}\right)$$ So hats off to the text the OP is using. (I have skipped some algebraic manipulations, no wonder OLS algebra is taught less and less these days...) SOME INTUITION So it appears that what works "against" us (larger variance) when predicting, works "for us" (lower variance) when estimating. 
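A quick simulation makes this contrast tangible; the following is a minimal sketch (the sample size, design points and variable names are my own illustrative choices, not from the text):
set.seed(42)
n <- 10; sigma <- 2
x <- 1:n                              # fixed design points
Sxx <- sum((x - mean(x))^2)
nrep <- 20000
res <- replicate(nrep, {
  y <- 3 + 0.5 * x + rnorm(n, sd = sigma)   # simulate from the model
  resid(lm(y ~ x))                          # the n residuals
})
# empirical variance of each residual vs. the formula derived above
cbind(empirical = apply(res, 1, var),
      theory    = sigma^2 * (1 - 1/n - (x - mean(x))^2 / Sxx))
The two columns agree closely; repeating the exercise for a fresh out-of-sample response at some $x^0$ reproduces the "plus" version instead, and this estimation-versus-prediction asymmetry is exactly what the intuition below is about.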
This is a good starting point for one to ponder why an excellent fit may be a bad sign for the prediction abilities of the model (however counter-intuitive this may sound...). The fact that we are estimating the expected value of the regressor, decreases the variance by $1/n$. Why? because by estimating , we "close our eyes" to some error-variability existing in the sample,since we essentially estimating an expected value. Moreover, the larger the deviation of an observation of a regressor from the regressor's sample mean, the smaller the variance of the residual associated with this observation will be... the more deviant the observation, the less deviant its residual... It is variability of the regressors that works for us, by "taking the place" of the unknown error-variability. But that's good for estimation . For prediction , the same things turn against us: now, by not taking into account, however imperfectly, the variability in $y^0$ (since we want to predict it), our imperfect estimators obtained from the sample show their weaknesses: we estimated the sample mean, we don't know the true expected value -the variance increases. We have an $x^0$ that is far away from the sample mean as calculated from the other observations -too bad, our prediction error variance gets another boost, because the predicted $\hat y^0$ will tend to go astray... in more scientific language "optimal predictors in the sense of reduced prediction error variance, represent a shrinkage towards the mean of the variable under prediction". We do not try to replicate the dependent variable's variability -we just try to stay "close to the average". | {
"source": [
"https://stats.stackexchange.com/questions/115011",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/55545/"
]
} |
115,090 | We’ve run a mixed effects logistic regression using the following syntax; # fit model
fm0 <- glmer(GoalEncoding ~ 1 + Group + (1|Subject) + (1|Item), exp0,
family = binomial(link="logit"))
# model output
summary(fm0) Subject and Item are the random effects. We’re getting an odd result which is the coefficient and standard deviation for the subject term are both zero; Generalized linear mixed model fit by maximum likelihood (Laplace
Approximation) [glmerMod]
Family: binomial ( logit )
Formula: GoalEncoding ~ 1 + Group + (1 | Subject) + (1 | Item)
Data: exp0
AIC BIC logLik deviance df.resid
449.8 465.3 -220.9 441.8 356
Scaled residuals:
Min 1Q Median 3Q Max
-2.115 -0.785 -0.376 0.805 2.663
Random effects:
Groups Name Variance Std.Dev.
Subject (Intercept) 0.000 0.000
Item (Intercept) 0.801 0.895
Number of obs: 360, groups: Subject, 30; Item, 12
Fixed effects:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.0275 0.2843 -0.1 0.92
GroupGeMo.EnMo 1.2060 0.2411 5.0 5.7e-07 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Correlation of Fixed Effects:
(Intr)
GroupGM.EnM -0.002 This should not be happening because obviously there is variation across subjects. When we run the same analysis in stata xtmelogit goal group_num || _all:R.subject || _all:R.item
Note: factor variables specified; option laplace assumed
Refining starting values:
Iteration 0: log likelihood = -260.60631
Iteration 1: log likelihood = -252.13724
Iteration 2: log likelihood = -249.87663
Performing gradient-based optimization:
Iteration 0: log likelihood = -249.87663
Iteration 1: log likelihood = -246.38421
Iteration 2: log likelihood = -245.2231
Iteration 3: log likelihood = -240.28537
Iteration 4: log likelihood = -238.67047
Iteration 5: log likelihood = -238.65943
Iteration 6: log likelihood = -238.65942
Mixed-effects logistic regression Number of obs = 450
Group variable: _all Number of groups = 1
Obs per group: min = 450
avg = 450.0
max = 450
Integration points = 1 Wald chi2(1) = 22.62
Log likelihood = -238.65942 Prob > chi2 = 0.0000
------------------------------------------------------------------------------
goal | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
group_num | 1.186594 .249484 4.76 0.000 .6976147 1.675574
_cons | -3.419815 .8008212 -4.27 0.000 -4.989396 -1.850234
------------------------------------------------------------------------------
------------------------------------------------------------------------------
Random-effects Parameters | Estimate Std. Err. [95% Conf. Interval]
-----------------------------+------------------------------------------------
_all: Identity |
sd(R.subject) | 7.18e-07 .3783434 0 .
-----------------------------+------------------------------------------------
_all: Identity |
sd(R.trial) | 2.462568 .6226966 1.500201 4.042286
------------------------------------------------------------------------------
LR test vs. logistic regression: chi2(2) = 126.75 Prob > chi2 = 0.0000
Note: LR test is conservative and provided only for reference.
Note: log-likelihood calculations are based on the Laplacian approximation. the results are as expected with a non-zero coefficient / s.e. for the Subject term. Originally we thought this might be something to do with the coding of the Subject term, but changing this from a string to an integer did not make any difference. Obviously the analysis is not working properly, but we are unable to pin down the source of the difficulties. (NB someone else on this forum has been experiencing a similar issue, but this thread remains unanswered link to question ) | This is discussed at some length at https://bbolker.github.io/mixedmodels-misc/glmmFAQ.html (search for "singular models"); it's common, especially when there is a small number of groups (although 30 is not particularly small in this context). One difference between lme4 and many other packages is that many packages, including lme4 's predecessor nlme , handle the fact that variance estimates must be non-negative by fitting variance on the log scale: that means that variance estimates can't be exactly zero, just very very small. lme4 , in contrast, uses constrained optimization, so it can return values that are exactly zero (see http://arxiv.org/abs/1406.5823 p. 24 for more discussion). http://rpubs.com/bbolker/6226 gives an example. In particular, looking closely at your among-subject variance results from Stata, you have an estimate of 7.18e-07 (relative to an intercept of -3.4) with a Wald standard deviation of .3783434 (essentially useless in this case!) and a 95% CI listed as "0"; this is technically "non-zero", but it's as close to zero as the program will report ... It's well known and theoretically provable (e.g. Stram and Lee Biometrics 1994) that the null distribution for variance components is a mixture of a point mass ('spike') at zero and a chi-squared distribution away from zero. Unsurprisingly (but I don't know if it's proven/well known), the sampling distribution of the variance component estimates often has a spike at zero even when the true value is not zero -- see e.g. http://rpubs.com/bbolker/4187 for an example, or the last example in the ?bootMer page: library(lme4)
library(boot)
## Check stored values from a longer (1000-replicate) run:
load(system.file("testdata","boo01L.RData",package="lme4"))
plot(boo01L,index=3) | {
"source": [
"https://stats.stackexchange.com/questions/115090",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/24109/"
]
} |
115,258 | Are there any reference document(s) that give a comprehensive list of activation functions in neural networks along with their pros/cons (and ideally some pointers to publications where they were successful or not so successful)? | I'll start making a list here of the ones I've learned so far. As @marcodena said, pros and cons are more difficult because it's mostly just heuristics learned from trying these things, but I figure at least having a list of what they are can't hurt. First, I'll define notation explicitly so there is no confusion: Notation This notation is from Neilsen's book . A Feedforward Neural Network is a many layers of neurons connected together. It takes in an input, then that input "trickles" through the network and the neural network returns an output vector. More formally, call $a^i_j$ the activation (aka output) of the $j^{th}$ neuron in the $i^{th}$ layer, where $a^1_j$ is the $j^{th}$ element in the input vector. Then we can relate the next layer's input to it's previous via the following relation: $$a^i_j = \sigma\bigg(\sum\limits_k (w^i_{jk} \cdot a^{i-1}_k) + b^i_j\bigg)$$ where $\sigma$ is the activation function, $w^i_{jk}$ is the weight from the $k^{th}$ neuron in the $(i-1)^{th}$ layer to the $j^{th}$ neuron in the $i^{th}$ layer, $b^i_j$ is the bias of the $j^{th}$ neuron in the $i^{th}$ layer, and $a^i_j$ represents the activation value of the $j^{th}$ neuron in the $i^{th}$ layer. Sometimes we write $z^i_j$ to represent $\sum\limits_k (w^i_{jk} \cdot a^{i-1}_k) + b^i_j$, in other words, the activation value of a neuron before applying the activation function. For more concise notation we can write $$a^i = \sigma(w^i \times a^{i-1} + b^i)$$ To use this formula to compute the output of a feedforward network for some input $I \in \mathbb{R}^n$, set $a^1 = I$, then compute $a^2, a^3, \ldots, a^m$, where $m$ is the number of layers. Activation Functions (in the following, we will write $\exp(x)$ instead of $e^x$ for readability) Identity Also known as a linear activation function. $$a^i_j = \sigma(z^i_j) = z^i_j$$ Step $$a^i_j = \sigma(z^i_j) = \begin{cases} 0 & \text{if } z^i_j < 0 \\ 1 & \text{if } z^i_j > 0 \end{cases}$$ Piecewise Linear Choose some $x_{\min}$ and $x_{\max}$, which is our "range". Everything less than than this range will be 0, and everything greater than this range will be 1. Anything else is linearly-interpolated between. Formally: $$a^i_j = \sigma(z^i_j) = \begin{cases} 0 & \text{if } z^i_j < x_{\min} \\ m z^i_j+b & \text{if } x_{\min} \leq z^i_j \leq x_{\max} \\ 1 & \text{if } z^i_j > x_{\max} \end{cases}$$ Where $$m = \frac{1}{x_{\max}-x_{\min}}$$ and $$b = -m x_{\min} = 1 - m x_{\max}$$ Sigmoid $$a^i_j = \sigma(z^i_j) = \frac{1}{1+\exp(-z^i_j)}$$ Complementary log-log $$a^i_j = \sigma(z^i_j) = 1 − \exp\!\big(−\exp(z^i_j)\big)$$ Bipolar $$a^i_j = \sigma(z^i_j) = \begin{cases} -1 & \text{if } z^i_j < 0 \\ \ \ \ 1 & \text{if } z^i_j > 0 \end{cases}$$ Bipolar Sigmoid $$a^i_j = \sigma(z^i_j) = \frac{1-\exp(-z^i_j)}{1+\exp(-z^i_j)}$$ Tanh $$a^i_j = \sigma(z^i_j) = \tanh(z^i_j)$$ LeCun's Tanh See Efficient Backprop .
$$a^i_j = \sigma(z^i_j) = 1.7159 \tanh\!\left( \frac{2}{3} z^i_j\right)$$ Scaled: Hard Tanh $$a^i_j = \sigma(z^i_j) = \max\!\big(-1, \min(1, z^i_j)\big)$$ Absolute $$a^i_j = \sigma(z^i_j) = \mid z^i_j \mid$$ Rectifier Also known as Rectified Linear Unit (ReLU), Max, or the Ramp Function . $$a^i_j = \sigma(z^i_j) = \max(0, z^i_j)$$ Modifications of ReLU These are some activation functions that I have been playing with that seem to have very good performance for MNIST for mysterious reasons. $$a^i_j = \sigma(z^i_j) = \max(0, z^i_j)+\cos(z^i_j)$$ Scaled: $$a^i_j = \sigma(z^i_j) = \max(0, z^i_j)+\sin(z^i_j)$$ Scaled: Smooth Rectifier Also known as Smooth Rectified Linear Unit, Smooth Max, or Soft plus $$a^i_j = \sigma(z^i_j) = \log\!\big(1+\exp(z^i_j)\big)$$ Logit $$a^i_j = \sigma(z^i_j) = \log\!\bigg(\frac{z^i_j}{(1 − z^i_j)}\bigg)$$ Scaled: Probit $$a^i_j = \sigma(z^i_j) = \sqrt{2}\,\text{erf}^{-1}(2z^i_j-1)$$. Where $\text{erf}$ is the Error Function . It can't be described via elementary functions, but you can find ways of approximating it's inverse at that Wikipedia page and here . Alternatively, it can be expressed as $$a^i_j = \sigma(z^i_j) = \phi(z^i_j)$$. Where $\phi $is the Cumulative distribution function (CDF). See here for means of approximating this. Scaled: Cosine See Random Kitchen Sinks . $$a^i_j = \sigma(z^i_j) = \cos(z^i_j)$$. Softmax Also known as the Normalized Exponential.
$$a^i_j = \frac{\exp(z^i_j)}{\sum\limits_k \exp(z^i_k)}$$ This one is a little weird because the output of a single neuron is dependent on the other neurons in that layer. It also does get difficult to compute, as $z^i_j$ may be a very high value, in which case $\exp(z^i_j)$ will probably overflow. Likewise, if $z^i_j$ is a very low value, it will underflow and become $0$. To combat this, we will instead compute $\log(a^i_j)$. This gives us: $$\log(a^i_j) = \log\left(\frac{\exp(z^i_j)}{\sum\limits_k \exp(z^i_k)}\right)$$ $$\log(a^i_j) = z^i_j - \log(\sum\limits_k \exp(z^i_k))$$ Here we need to use the log-sum-exp trick : Let's say we are computing: $$\log(e^2 + e^9 + e^{11} + e^{-7} + e^{-2} + e^5)$$ We will first sort our exponentials by magnitude for convenience: $$\log(e^{11} + e^9 + e^5 + e^2 + e^{-2} + e^{-7})$$ Then, since $e^{11}$ is our highest, we multiply by $\frac{e^{-11}}{e^{-11}}$: $$\log(\frac{e^{-11}}{e^{-11}}(e^{11} + e^9 + e^5 + e^2 + e^{-2} + e^{-7}))$$ $$\log(\frac{1}{e^{-11}}(e^{0} + e^{-2} + e^{-6} + e^{-9} + e^{-13} + e^{-18}))$$ $$\log(e^{11}(e^{0} + e^{-2} + e^{-6} + e^{-9} + e^{-13} + e^{-18}))$$ $$\log(e^{11}) + \log(e^{0} + e^{-2} + e^{-6} + e^{-9} + e^{-13} + e^{-18})$$ $$ 11 + \log(e^{0} + e^{-2} + e^{-6} + e^{-9} + e^{-13} + e^{-18})$$ We can then compute the expression on the right and take the log of it. It's okay to do this because that sum is very small with respect to $\log(e^{11})$, so any underflow to 0 wouldn't have been significant enough to make a difference anyway. Overflow can't happen in the expression on the right because we are guaranteed that after multiplying by $e^{-11}$, all the powers will be $\leq 0$. Formally, we call $m=\max(z^i_1, z^i_2, z^i_3, ...)$. Then: $$\log\!(\sum\limits_k \exp(z^i_k)) = m + \log(\sum\limits_k \exp(z^i_k - m))$$ Our softmax function then becomes: $$a^i_j = \exp(\log(a^i_j))=\exp\!\left( z^i_j - m - \log(\sum\limits_k \exp(z^i_k - m))\right)$$ Also as a sidenote, the derivative of the softmax function is: $$\frac{d \sigma(z^i_j)}{d z^i_j}=\sigma^{\prime}(z^i_j)= \sigma(z^i_j)(1 - \sigma(z^i_j))$$ Maxout This one is also a little tricky. Essentially the idea is that we break up each neuron in our maxout layer into lots of sub-neurons, each of which have their own weights and biases. Then the input to a neuron goes to each of it's sub-neurons instead, and each sub-neuron simply outputs their $z$'s (without applying any activation function). The $a^i_j$ of that neuron is then the max of all its sub-neuron's outputs. Formally, in a single neuron, say we have $n$ sub-neurons. Then $$a^i_j = \max\limits_{k \in [1,n]} s^i_{jk}$$ where $$s^i_{jk} = a^{i-1} \bullet w^i_{jk} + b^i_{jk}$$ ($\bullet$ is the dot product ) To help us think about this, consider the weight matrix $W^i$ for the $i^{\text{th}}$ layer of a neural network that is using, say, a sigmoid activation function. $W^i$ is a 2D matrix, where each column $W^i_j$ is a vector for neuron $j$ containing a weight for every neuron in the the previous layer $i-1$. If we're going to have sub-neurons, we're going to need a 2D weight matrix for each neuron, since each sub-neuron will need a vector containing a weight for every neuron in the previous layer. This means that $W^i$ is now a 3D weight matrix, where each $W^i_j$ is the 2D weight matrix for a single neuron $j$. And then $W^i_{jk}$ is a vector for sub-neuron $k$ in neuron $j$ that contains a weight for every neuron in the previous layer $i-1$. 
Likewise, in a neural network that is again using, say, a sigmoid activation function, $b^i$ is a vector with a bias $b^i_j$ for each neuron $j$ in layer $i$. To do this with sub-neurons, we need a 2D bias matrix $b^i$ for each layer $i$, where $b^i_j$ is the vector with a bias for $b^i_{jk}$ each subneuron $k$ in the $j^{\text{th}}$ neuron. Having a weight matrix $w^i_j$ and a bias vector $b^i_j$ for each neuron then makes the above expressions very clear, it's simply applying each sub-neuron's weights $w^i_{jk}$ to the outputs $a^{i-1}$ from layer $i-1$, then applying their biases $b^i_{jk}$ and taking the max of them. Radial Basis Function Networks Radial Basis Function Networks are a modification of Feedforward Neural Networks, where instead of using $$a^i_j=\sigma\bigg(\sum\limits_k (w^i_{jk} \cdot a^{i-1}_k) + b^i_j\bigg)$$ we have one weight $w^i_{jk}$ per node $k$ in the previous layer (as normal), and also one mean vector $\mu^i_{jk}$ and one standard deviation vector $\sigma^i_{jk}$ for each node in the previous layer. Then we call our activation function $\rho$ to avoid getting it confused with the standard deviation vectors $\sigma^i_{jk}$. Now to compute $a^i_j$ we first need to compute one $z^i_{jk}$ for each node in the previous layer. One option is to use Euclidean distance: $$z^i_{jk}=\sqrt{\Vert(a^{i-1}-\mu^i_{jk}\Vert}=\sqrt{\sum\limits_\ell (a^{i-1}_\ell - \mu^i_{jk\ell})^2}$$ Where $\mu^i_{jk\ell}$ is the $\ell^\text{th}$ element of $\mu^i_{jk}$. This one does not use the $\sigma^i_{jk}$. Alternatively there is Mahalanobis distance, which supposedly performs better: $$z^i_{jk}=\sqrt{(a^{i-1}-\mu^i_{jk})^T \Sigma^i_{jk} (a^{i-1}-\mu^i_{jk})}$$ where $\Sigma^i_{jk}$ is the covariance matrix , defined as: $$\Sigma^i_{jk} = \text{diag}(\sigma^i_{jk})$$ In other words, $\Sigma^i_{jk}$ is the diagonal matrix with $\sigma^i_{jk}$ as it's diagonal elements. We define $a^{i-1}$ and $\mu^i_{jk}$ as column vectors here because that is the notation that is normally used. These are really just saying that Mahalanobis distance is defined as $$z^i_{jk}=\sqrt{\sum\limits_\ell \frac{(a^{i-1}_{\ell} - \mu^i_{jk\ell})^2}{\sigma^i_{jk\ell}}}$$ Where $\sigma^i_{jk\ell}$ is the $\ell^\text{th}$ element of $\sigma^i_{jk}$. Note that $\sigma^i_{jk\ell}$ must always be positive, but this is a typical requirement for standard deviation so this isn't that surprising. If desired, Mahalanobis distance is general enough that the covariance matrix $\Sigma^i_{jk}$ can be defined as other matrices. For example, if the covariance matrix is the identity matrix, our Mahalanobis distance reduces to the Euclidean distance. $\Sigma^i_{jk} = \text{diag}(\sigma^i_{jk})$ is pretty common though, and is known as normalized Euclidean distance . Either way, once our distance function has been chosen, we can compute $a^i_j$ via $$a^i_j=\sum\limits_k w^i_{jk}\rho(z^i_{jk})$$ In these networks they choose to multiply by weights after applying the activation function for reasons. This describes how to make a multi-layer Radial Basis Function network, however, usually there is only one of these neurons, and its output is the output of the network. It's drawn as multiple neurons because each mean vector $\mu^i_{jk}$ and each standard deviation vector $\sigma^i_{jk}$ of that single neuron is considered a one "neuron" and then after all of these outputs there is another layer that takes the sum of those computed values times the weights, just like $a^i_j$ above. 
Splitting it into two layers with a "summing" vector at the end seems odd to me, but it's what they do. Also see here . Radial Basis Function Network Activation Functions Gaussian $$\rho(z^i_{jk}) = \exp\!\big(-\frac{1}{2} (z^i_{jk})^2\big)$$ Multiquadratic Choose some point $(x, y)$. Then we compute the distance from $(z^i_j, 0)$ to $(x, y)$: $$\rho(z^i_{jk}) = \sqrt{(z^i_{jk}-x)^2 + y^2}$$ This is from Wikipedia . It isn't bounded, and can be any positive value, though I am wondering if there is a way to normalize it. When $y=0$, this is equivalent to absolute (with a horizontal shift $x$). Inverse Multiquadratic Same as quadratic, except flipped: $$\rho(z^i_{jk}) = \frac{1}{\sqrt{(z^i_{jk}-x)^2 + y^2}}$$ *Graphics from intmath's Graphs using SVG . | {
"source": [
"https://stats.stackexchange.com/questions/115258",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/12359/"
]
} |
116,770 | Background: Note: My data set and R code are included below text I wish to use AIC to compare two mixed effects models generated using the lme4 package in R. Each model has one fixed effect and one random effect. The fixed effect differs between models, but the random effect remains the same between models. I've found that if I use REML=TRUE , model2 has the lower AIC score, but if I use REML=FALSE , model1 has the lower AIC score. Support for using ML: Zuur et al. (2009; p. 122) suggest that "To compare models with nested fixed effects (but with the same random structure), ML estimation must be used and not REML." This indicates to me that I ought to use ML since my random effects are the same in both models, but my fixed effects differ. [Zuur et al. 2009. Mixed Effect Models and Extensions in Ecology with R. Springer.] Support for using REML: However, I notice that when I use ML, the residual variance associated with the random effects differs between the two models ( model1 = 136.3; model2 = 112.9), but when I use REML, it is the same between models (model1=model2=151.5). This implies to me that I ought instead to use REML so that the random residual variance remains the same between models with the same random variable. Question: Doesn't it make more sense to use REML than ML for comparisons of models where the fixed effects change and the random effects remain the same? If not, can you explain why or point me to other literature that explains more? # Model2 "wins" if REML=TRUE:
REMLmodel1 = lmer(Response ~ Fixed1 + (1|Random1),data,REML = TRUE)
REMLmodel2 = lmer(Response ~ Fixed2 + (1|Random1),data,REML = TRUE)
AIC(REMLmodel1,REMLmodel2)
summary(REMLmodel1)
summary(REMLmodel2)
# Model1 "wins" if REML=FALSE:
MLmodel1 = lmer(Response ~ Fixed1 + (1|Random1),data,REML = FALSE)
MLmodel2 = lmer(Response ~ Fixed2 + (1|Random1),data,REML = FALSE)
AIC(MLmodel1,MLmodel2)
summary(MLmodel1)
summary(MLmodel2) Dataset: Response Fixed1 Fixed2 Random1
5.20 A A 1
32.50 A A 1
6.57 A A 2
24.77 A B 3
41.69 A B 3
34.29 A B 4
1.80 A B 4
10.00 A B 5
15.56 A B 5
4.44 A C 6
21.65 A C 6
9.20 A C 7
4.11 A C 7
12.52 B D 8
0.25 B D 8
27.34 B D 9
11.54 B E 10
0.86 B E 10
0.68 B E 11
4.00 B E 11 | Zuur et al., and Faraway (from @janhove's comment above) are right; using likelihood-based methods (including AIC) to compare two models with different fixed effects that are fitted by REML will generally lead to nonsense. Faraway (2006) Extending the linear model with R (p. 156): The reason is that REML estimates the random effects by considering linear combinations of the data that remove the fixed effects. If these fixed effects are changed, the likelihoods of the two models will not be directly comparable These two questions discuss the issue further: Allowed comparisons of mixed effects models (random effects primarily) ; REML vs ML stepAIC | {
"source": [
"https://stats.stackexchange.com/questions/116770",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/48024/"
]
} |
116,935 | Say we have to GLMMs mod1 <- glmer(y ~ x + A + (1|g), data = dat)
mod2 <- glmer(y ~ x + B + (1|g), data = dat) These models are not nested in the usual sense of: a <- glmer(y ~ x + A + (1|g), data = dat)
b <- glmer(y ~ x + A + B + (1|g), data = dat) so we can't do anova(mod1, mod2) as we would with anova(a, b). Can we use AIC to say which is the best model instead? | The AIC can be applied to non-nested models. In fact, this is one of the most widespread myths (misunderstandings?) about AIC. See: Akaike Information Criterion AIC MYTHS AND MISUNDERSTANDINGS One thing you have to be careful about is to include all the normalising constants, since these are different for the different (non-nested) models: See also: Non-nested model selection AIC for non-nested models: normalizing constant In the context of GLMMs, a more delicate question is how reliable the AIC is for comparing this sort of model (see also @BenBolker's). Other versions of the AIC are discussed and compared in the following paper: On the behaviour of marginal and conditional AIC in linear mixed models | {
"source": [
"https://stats.stackexchange.com/questions/116935",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/17865/"
]
} |
117,052 | I have been trying to replicate the results of the Stata option robust in R. I have used the rlm command from the MASS package and also the command lmrob from the package "robustbase". In both cases the results are quite different from the "robust" option in Stata. Can anybody please suggest something in this context? Here are the results I obtained when I ran the robust option in Stata: . reg yb7 buildsqb7 no_bed no_bath rain_harv swim_pl pr_terrace, robust
Linear regression Number of obs = 4451
F( 6, 4444) = 101.12
Prob > F = 0.0000
R-squared = 0.3682
Root MSE = .5721
------------------------------------------------------------------------------
| Robust
yb7 | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
buildsqb7 | .0046285 .0026486 1.75 0.081 -.0005639 .009821
no_bed | .3633841 .0684804 5.31 0.000 .2291284 .4976398
no_bath | .0832654 .0706737 1.18 0.239 -.0552904 .2218211
rain_harv | .3337906 .0395113 8.45 0.000 .2563289 .4112524
swim_pl | .1627587 .0601765 2.70 0.007 .0447829 .2807346
pr_terrace | .0032754 .0178881 0.18 0.855 -.0317941 .0383449
_cons | 13.68136 .0827174 165.40 0.000 13.51919 13.84353 And this is what I obtained in R with the lmrob option: > modelb7<-lmrob(yb7~Buildsqb7+No_Bed+Rain_Harv+Swim_Pl+Gym+Pr_Terrace, data<-bang7)
> summary(modelb7)
Call:
lmrob(formula = yb7 ~ Buildsqb7 + No_Bed + Rain_Harv + Swim_Pl + Gym + Pr_Terrace,
data = data <- bang7)
\--> method = "MM"
Residuals:
Min 1Q Median 3Q Max
-51.03802 -0.12240 0.02088 0.18199 8.96699
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 12.648261 0.055078 229.641 <2e-16 ***
Buildsqb7 0.060857 0.002050 29.693 <2e-16 ***
No_Bed 0.005629 0.019797 0.284 0.7762
Rain_Harv 0.230816 0.018290 12.620 <2e-16 ***
Swim_Pl 0.065199 0.028121 2.319 0.0205 *
Gym 0.023024 0.014655 1.571 0.1162
Pr_Terrace 0.015045 0.013951 1.078 0.2809
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Robust residual standard error: 0.1678
Multiple R-squared: 0.8062, Adjusted R-squared: 0.8059 | Charles is nearly there in his answer, but robust option of the regress command (and other regression estimation commands) in Stata makes it possible to use multiple types of heteroskedasticity and autocorrelation robust variance-covariance matrix estimators, as does the coeftest function in the lmtest package, which in turn depends on the respective variance-covariance matrices produced by the vcovHC function in the sandwich package. However, the default variance-covariance matrices used by the two is different: 1. The default variance-covariance matrix returned by vcocHC is the so-called HC3 for reasons described in the man page for vcovHC . 2. The sandwich option used by Charles makes coeftest use the HC0 robust variance-covariance matrix. 3. To reproduce the Stata default behavior of using the robust option in a call to regress you need to request vcovHC to use the HC1 robust variance-covariance matrix. Read more about it here . The following example that demonstrates all the points made above is based on the example here . library(foreign)
library(sandwich)
library(lmtest)
dfAPI = read.dta("http://www.ats.ucla.edu/stat/stata/webbooks/reg/elemapi2.dta")
lmAPI = lm(api00 ~ acs_k3 + acs_46 + full + enroll, data= dfAPI)
summary(lmAPI) # non-robust
# check that "sandwich" returns HC0
coeftest(lmAPI, vcov = sandwich) # robust; sandwich
coeftest(lmAPI, vcov = vcovHC(lmAPI, "HC0")) # robust; HC0
# check that the default robust var-cov matrix is HC3
coeftest(lmAPI, vcov = vcovHC(lmAPI)) # robust; HC3
coeftest(lmAPI, vcov = vcovHC(lmAPI, "HC3")) # robust; HC3 (default)
# reproduce the Stata default
coeftest(lmAPI, vcov = vcovHC(lmAPI, "HC1")) # robust; HC1 (Stata default) The last line of code above reproduces results from Stata: use http://www.ats.ucla.edu/stat/stata/webbooks/reg/elemapi2
regress api00 acs_k3 acs_46 full enroll, robust | {
"source": [
"https://stats.stackexchange.com/questions/117052",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/56579/"
]
} |
117,078 | For plotting with R, should I learn ggplot2 or ggvis? I don't necessarily want to learn both if one of them is superior in any regard. Why does the R community keep creating new packages with overlapping functionality? The introduction blog post does not say a word about why ggvis was created given that a sophisticated plotting package, ggplot2, already exists. | Start with ggplot2. It creates static plots. Apart from static plots, ggvis can be used for creating interactive plots as well. Once you have learned the syntax of ggplot2, the syntax for adding interactivity to create ggvis plots will follow naturally. | {
"source": [
"https://stats.stackexchange.com/questions/117078",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8457/"
]
} |
117,427 | I am confused about ZCA whitening and normal whitening (which is obtained by dividing principal components by the square roots of PCA eigenvalues). As far as I know, $$\mathbf x_\mathrm{ZCAwhite} = \mathbf U \mathbf x_\mathrm{PCAwhite},$$ where $\mathbf U$ are PCA eigenvectors. What are the uses of ZCA whitening? What are the differences between normal whitening and ZCA whitening? | Let your (centered) data be stored in a $n\times d$ matrix $\mathbf X$ with $d$ features (variables) in columns and $n$ data points in rows. Let the covariance matrix $\mathbf C=\mathbf X^\top \mathbf X/n$ have eigenvectors in columns of $\mathbf E$ and eigenvalues on the diagonal of $\mathbf D$, so that $\mathbf C = \mathbf E \mathbf D \mathbf E^\top$. Then what you call "normal" PCA whitening transformation is given by $\mathbf W_\mathrm{PCA} = \mathbf D^{-1/2} \mathbf E^\top$, see e.g. my answer in How to whiten the data using principal component analysis? However, this whitening transformation is not unique. Indeed, whitened data will stay whitened after any rotation, which means that any $\mathbf W = \mathbf R \mathbf W_\mathrm{PCA}$ with orthogonal matrix $\mathbf R$ will also be a whitening transformation. In what is called ZCA whitening, we take $\mathbf E$ (stacked together eigenvectors of the covariance matrix) as this orthogonal matrix, i.e. $$\mathbf W_\mathrm{ZCA} = \mathbf E \mathbf D^{-1/2} \mathbf E^\top = \mathbf C^{-1/2}.$$ One defining property of ZCA transformation ( sometimes also called "Mahalanobis transformation") is that it results in whitened data that is as close as possible to the original data (in the least squares sense). In other words, if you want to minimize $\|\mathbf X - \mathbf X \mathbf A^\top\|^2$ subject to $ \mathbf X \mathbf A^\top$ being whitened, then you should take $\mathbf A = \mathbf W_\mathrm{ZCA}$. Here is a 2D illustration: Left subplot shows the data and its principal axes. Note the dark shading in the upper-right corner of the distribution: it marks its orientation. Rows of $\mathbf W_\mathrm{PCA}$ are shown on the second subplot: these are the vectors the data is projected on. After whitening (below) the distribution looks round, but notice that it also looks rotated --- dark corner is now on the East side, not on the North-East side. Rows of $\mathbf W_\mathrm{ZCA}$ are shown on the third subplot (note that they are not orthogonal!). After whitening (below) the distribution looks round and it's oriented in the same way as originally. Of course, one can get from PCA whitened data to ZCA whitened data by rotating with $\mathbf E$. The term "ZCA" seems to have been introduced in Bell and Sejnowski 1996 in the context of independent component analysis, and stands for "zero-phase component analysis". See there for more details. Most probably, you came across this term in the context of image processing. It turns out, that when applied to a bunch of natural images (pixels as features, each image as a data point), principal axes look like Fourier components of increasing frequencies, see first column of their Figure 1 below. So they are very "global". On the other hand, rows of ZCA transformation look very "local", see the second column. This is precisely because ZCA tries to transform the data as little as possible, and so each row should better be close to one the original basis functions (which would be images with only one active pixel). 
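For concreteness, here is a minimal R sketch of the two whitening maps defined above, applied to a toy centered data matrix (the data and variable names are my own choices, not the image example):
set.seed(1)
X <- matrix(rnorm(2000), ncol = 2) %*% chol(matrix(c(1, .8, .8, 1), 2))  # correlated toy data
X <- scale(X, center = TRUE, scale = FALSE)        # center the columns
C <- crossprod(X) / nrow(X)                        # covariance matrix C = X'X / n
e <- eigen(C)                                      # columns of e$vectors are E, e$values is diag(D)
W_pca <- diag(1 / sqrt(e$values)) %*% t(e$vectors) # W_PCA = D^{-1/2} E'
W_zca <- e$vectors %*% W_pca                       # W_ZCA = E D^{-1/2} E' = C^{-1/2}
round(cov(X %*% t(W_pca)), 2)                      # ~ identity: whitened
round(cov(X %*% t(W_zca)), 2)                      # ~ identity too, but closest to the original X
Left-multiplying W_pca by any rotation gives another valid whitening matrix; choosing the rotation E is exactly what makes W_zca symmetric and keeps the whitened data as close as possible to X.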
And this is possible to achieve, because correlations in natural images are mostly very local (so de-correlation filters can also be local). Update More examples of ZCA filters and of images transformed with ZCA are given in Krizhevsky, 2009, Learning Multiple Layers of Features from Tiny Images , see also examples in @bayerj's answer (+1). I think these examples give an idea as to when ZCA whitening might be preferable to the PCA one. Namely, ZCA-whitened images still resemble normal images , whereas PCA-whitened ones look nothing like normal images. This is probably important for algorithms like convolutional neural networks (as e.g. used in Krizhevsky's paper), which treat neighbouring pixels together and so greatly rely on the local properties of natural images. For most other machine learning algorithms it should be absolutely irrelevant whether the data is whitened with PCA or ZCA. | {
"source": [
"https://stats.stackexchange.com/questions/117427",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/41749/"
]
} |
117,641 | The effects package provides a very fast and convenient way of plotting linear mixed effect model results obtained through the lme4 package. The effect function calculates confidence intervals (CIs) very quickly, but how trustworthy are these confidence intervals? For example: library(lme4)
library(effects)
library(ggplot)
data(Pastes)
fm1 <- lmer(strength ~ batch + (1 | cask), Pastes)
effs <- as.data.frame(effect(c("batch"), fm1))
ggplot(effs, aes(x = batch, y = fit, ymin = lower, ymax = upper)) +
geom_rect(xmax = Inf, xmin = -Inf, ymin = effs[effs$batch == "A", "lower"],
ymax = effs[effs$batch == "A", "upper"], alpha = 0.5, fill = "grey") +
geom_errorbar(width = 0.2) + geom_point() + theme_bw() According to CIs calculated using effects package, batch "E" does not overlap with batch "A". If I try the same using confint.merMod function and the default method: a <- fixef(fm1)
b <- confint(fm1)
# Computing profile confidence intervals ...
# There were 26 warnings (use warnings() to see them)
b <- data.frame(b)
b <- b[-1:-2,]
b1 <- b[[1]]
b2 <- b[[2]]
dt <- data.frame(fit = c(a[1], a[1] + a[2:length(a)]),
lower = c(b1[1], b1[1] + b1[2:length(b1)]),
upper = c(b2[1], b2[1] + b2[2:length(b2)]) )
dt$batch <- LETTERS[1:nrow(dt)]
ggplot(dt, aes(x = batch, y = fit, ymin = lower, ymax = upper)) +
geom_rect(xmax = Inf, xmin = -Inf, ymin = dt[dt$batch == "A", "lower"],
ymax = dt[dt$batch == "A", "upper"], alpha = 0.5, fill = "grey") +
geom_errorbar(width = 0.2) + geom_point() + theme_bw() I see that all of the CIs overlap. I also get warnings indicating that the function failed to calculate trustworthy CIs. This example, and my actual dataset, makes me to suspect that effects package takes shortcuts in CI calculation that might not entirely be approved by statisticians. How trustworthy are the CIs returned by effect function from effects package for lmer objects? What have I tried: Looking into the source code, I noticed that effect function relies on Effect.merMod function, which in turn directs to Effect.mer function, which looks like this: effects:::Effect.mer
function (focal.predictors, mod, ...)
{
result <- Effect(focal.predictors, mer.to.glm(mod), ...)
result$formula <- as.formula(formula(mod))
result
}
<environment: namespace:effects> mer.to.glm function seems to calculate Variance-Covariate Matrix from the lmer object: effects:::mer.to.glm
function (mod)
{
...
mod2$vcov <- as.matrix(vcov(mod))
...
mod2
} This, in turn, is probably used in Effect.default function to calculate CIs (I might have misunderstood this part): effects:::Effect.default
...
z <- qnorm(1 - (1 - confidence.level)/2)
V <- vcov.(mod)
eff.vcov <- mod.matrix %*% V %*% t(mod.matrix)
rownames(eff.vcov) <- colnames(eff.vcov) <- NULL
var <- diag(eff.vcov)
result$vcov <- eff.vcov
result$se <- sqrt(var)
result$lower <- effect - z * result$se
result$upper <- effect + z * result$se
... I do not know enough about LMMs to judge whether this is a right approach, but considering the discussion around confidence interval calculation for LMMs, this approach appears suspiciously simple. | All of the results are essentially the same ( for this particular example ). Some theoretical differences are: as @rvl points out, your reconstruction of CIs without taking account of covariance among parameters is just wrong (sorry) confidence intervals for parameters can be based on Wald confidence intervals (assuming a quadratic log-likelihood surface): lsmeans , effects , confint(.,method="Wald") ; except for lsmeans , these methods ignore finite-size effects ("degrees of freedom"), but in this case it barely makes any difference ( df=40 is practically indistinguishable from infinite df ) ... or on profile confidence intervals (the default method; ignores finite-size effects but allows for non-quadratic surfaces) ... or on parametric bootstrapping (the gold standard -- assumes the model is correct [responses are Normal, random effects are Normally distributed, data are conditionally independent, etc.], but otherwise makes few assumptions) I think all of these approaches are reasonable (some are more approximate than others), but in this case it barely makes any difference which one you use. If you're concerned, try out several contrasting methods on your data, or on simulated data that resemble your own, and see what happens ... (PS: I wouldn't put too much weight on the fact that the confidence intervals of A and E don't overlap. You'd have to do a proper pairwise comparison procedure to make reliable inferences about the differences between this particular pair of estimates ...) 95% CIs: Comparison code: library(lme4)
fm2 <- lmer(strength ~ batch - 1 + (1 | cask), Pastes)
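# Wald, profile (the default) and parametric-bootstrap confidence intervals from lme4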
c0 <- confint(fm2,method="Wald")
c1 <- confint(fm2)
c2 <- confint(fm2,method="boot")
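# CIs as computed by the effects and lsmeans packages, for comparison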
library(effects)
library(lsmeans)
c3 <- with(effect("batch",fm2),cbind(lower,upper))
c4 <- with(summary(lsmeans(fm2,spec="batch")),cbind(lower.CL,upper.CL))
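# helper: keep the last 10 rows of each CI matrix (the batch fixed effects) and label them A-J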
tmpf <- function(method,val) {
data.frame(method=method,
v=LETTERS[1:10],
setNames(as.data.frame(tail(val,10)),
c("lwr","upr")))
}
library(ggplot2); theme_set(theme_bw())
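# stack the five sets of CIs and plot them side by side for each batch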
allCI <- rbind(tmpf("lme4_wald",c0),
tmpf("lme4_prof",c1),
tmpf("lme4_boot",c2),
tmpf("effects",c3),
tmpf("lsmeans",c4))
ggplot(allCI,aes(v,ymin=lwr,ymax=upr,colour=method))+
geom_linerange(position=position_dodge(width=0.8))
ggsave("pastes_confint.png",width=10) | {
"source": [
"https://stats.stackexchange.com/questions/117641",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10829/"
]
} |
117,643 | I've been told that it is beneficial to use stratified cross validation especially when response classes are unbalanced. If one purpose of cross-validation is to help account for the randomness of our original training data sample, surely making each fold have the same class distribution would be working against this unless you were sure your original training set had a representative class distribution. Is my logic flawed? EDIT I'm interested in whether this method damages the benefit of CV. I can see why it is necessary if you have a small sample/very unbalanced classes/both to avoid not having a single representative of the minority class in a fold. The paper Apples-to-Apples in Cross-Validation Studies:
Pitfalls in Classifier Performance Measurement puts forward the case for stratification well, but all arguments seem to amount to 'Stratification provides a safeguard and more consistency' but no safeguard would be required given enough data. Is the answer simply "We use it out of necessity as we rarely have enough data."? | Bootstrapping seeks to simulate the effect of drawing a new sample from the population, and doesn't seek to ensure distinct test sets (residues after N from N sampling with replacement). RxK-fold Cross-validation ensures K distinct test folds but is then repeated R times for different random partitionings to allow independence assumptions to hold for K-CV, but this is lost with repetition. Stratified Cross-validation violates the principle that the test labels should never have been looked at before the statistics are calculated, but this is generally thought to be innocuous as the only effect is to balance the folds, but it does lead to loss of diversity (an unwanted loss of variance). It moves even further from the Bootstrap idea of constructing a sample similar to what you'd draw naturally from the whole population. Arguably the main reason stratification is important is to address defects in the classification algorithms, as they are too easily biased by over- or under-representation of classes. An algorithm that uses balancing techniques (either by selection or weighting) or optimizes a chance-correct measure (Kappa or preferably Informedness) is less impacted by this, although even such algorithms can't learn or test a class that isn't there. Forcing each fold to have at least m instances of each class, for some small m, is an alternative to stratification that works for both Bootstrapping and CV. It does have a smoothing bias, making folds tend to be more balanced than they would otherwise be expected to be. Re ensembles and diversity: If classifiers learned on the training folds are used for fusion, not just estimation of generalization error, the increasing rigidity of CV, stratified Bootstrap and stratified CV leads to loss of diversity, and potentially resilience, compared to Bootstrap, forced Bootstrap and forced CV. | {
"source": [
"https://stats.stackexchange.com/questions/117643",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/34653/"
]
} |
117,689 | I recently purchased a data science interview resource in which one of the probability questions was as follows: Given draws from a normal distribution with known parameters, how can you simulate draws from a uniform distribution? My original thought process was that, for a discrete random variable, we could break the normal distribution into K unique subsections where each subsection has an equal area under the normal curve. Then we could determine which of K values the variable takes by recognizing which area of the normal curve the variable ends up falling into. But this would only work for discrete random variables. I did some research into how we might do the same for continuous random variables, but unfortunately I could only find techniques like inverse transform sampling that would use as input a uniform random variable, and could output random variables from some other distribution. I was thinking that perhaps we could do this process in reverse to get uniform random variables? I also thought about possibly using the Normal random variables as inputs into a linear congruential generator, but I'm not sure if this would work. Any thoughts on how I might approach this question? | In the spirit of using simple algebraic calculations which are unrelated to computation of the Normal distribution , I would lean towards the following. They are ordered as I thought of them (and therefore needed to get more and more creative), but I have saved the best--and most surprising--to last. Reverse the Box-Mueller technique : from each pair of normals $(X,Y)$, two independent uniforms can be constructed as $\text{atan2}(Y,X)$ (on the interval $[-\pi, \pi]$) and $\exp(-(X^2+Y^2)/2)$ (on the interval $[0,1]$). Take the normals in groups of two and sum their squares to obtain a sequence of $\chi^2_2$ variates $Y_1, Y_2, \ldots, Y_i, \ldots$. The expressions obtained from the pairs $$X_i = \frac{Y_{2i}}{Y_{2i-1}+Y_{2i}}$$ will have a $\text{Beta}(1,1)$ distribution, which is uniform. That this requires only basic, simple arithmetic should be clear. Because the exact distribution of the Pearson correlation coefficient of a four-pair sample from a standard bivariate Normal distribution is uniformly distributed on $[-1,1]$, we may simply take the normals in groups of four pairs (that is, eight values in each set) and return the correlation coefficient of these pairs. (This involves simple arithmetic plus two square root operations.) It has been known since ancient times that a cylindrical projection of the sphere (a surface in three-space) is equal-area . This implies that in the projection of a uniform distribution on the sphere, both the horizontal coordinate (corresponding to longitude) and the vertical coordinate (corresponding to latitude) will have uniform distributions. Because the trivariate standard Normal distribution is spherically symmetric, its projection onto the sphere is uniform. Obtaining the longitude is essentially the same calculation as the angle in the Box-Mueller method ( q.v. ), but the projected latitude is new. The projection onto the sphere merely normalizes a triple of coordinates $(x,y,z)$ and at that point $z$ is the projected latitude. Thus, take the Normal variates in groups of three, $X_{3i-2}, X_{3i-1}, X_{3i}$, and compute $$\frac{X_{3i}}{\sqrt{X_{3i-2}^2 + X_{3i-1}^2 + X_{3i}^2}}$$ for $i=1, 2, 3, \ldots$. 
Because most computing systems represent numbers in binary , uniform number generation usually begins by producing uniformly distributed integers between $0$ and $2^{32}-1$ (or some high power of $2$ related to computer word length) and rescaling them as needed. Such integers are represented internally as strings of $32$ binary digits. We can obtain independent random bits by comparing a Normal variable to its median. Thus, it suffices to break the Normal variables into groups of size equal to the desired number of bits, compare each one to its mean, and assemble the resulting sequences of true/false results into a binary number. Writing $k$ for the number of bits and $H$ for the sign (that is, $H(x)=1$ when $x\gt 0$ and $H(x)=0$ otherwise) we can express the resulting normalized uniform value in $[0, 1)$ with the formula $$\sum_{j=0}^{k-1} H(X_{ki - j})2^{-j-1}.$$ The variates $X_n$ can be drawn from any continuous distribution whose median is $0$ (such as a standard Normal); they are processed in groups of $k$ with each group producing one such pseudo-uniform value. Rejection sampling is a standard, flexible, powerful way to draw random variates from arbitrary distributions. Suppose the target distribution has PDF $f$. A value $Y$ is drawn according to another distribution with PDF $g$. In the rejection step, a uniform value $U$ lying between $0$ and $g(Y)$ is drawn independently of $Y$ and compared to $f(Y)$: if it is smaller, $Y$ is retained but otherwise the process is repeated. This approach seems circular, though: how do we generate a uniform variate with a process that needs a uniform variate to begin with? The answer is that we don't actually need a uniform variate in order to carry out the rejection step. Instead (assuming $g(Y)\ne 0$) we can flip a fair coin to obtain a $0$ or $1$ randomly. This will be interpreted as the first bit in the binary representation of a uniform variate $U$ in the interval $[0,1)$. When the outcome is $0$, that means $0 \le U \lt 1/2$; otherwise, $1/2\le U \lt 1$. Half of the time, this is enough to decide the rejection step: if $f(Y)/g(Y) \ge 1/2$ but the coin is $0$, $Y$ should be accepted; if $f(Y)/g(Y) \lt 1/2$ but the coin is $1$, $Y$ should be rejected; otherwise, we need to flip the coin again in order to obtain the next bit of $U$. Because--no matter what value $f(Y)/g(Y)$ has--there is a $1/2$ chance of stopping after each flip, the expected number of flips is only $1/2(1)+1/4(2)+1/8(3)+\cdots+2^{-n}(n)+\cdots=2$. Rejection sampling can be worthwhile (and efficient) provided the expected number of rejections is small. We can accomplish this by fitting the largest possible rectangle (representing a uniform distribution) beneath a Normal PDF. Using Calculus to optimize the rectangle's area, you will find that its endpoints should lie at $\pm 1$, where its height equals $\exp(-1/2)/\sqrt{2\pi}\approx 0.241971$, making its area a little greater than $0.48$. By using this standard Normal density as $g$ and rejecting all values outside the interval $[-1,1]$ automatically, and otherwise applying the rejection procedure, we will obtain uniform variates in $[-1,1]$ efficiently: In a fraction $2\Phi(-1) \approx 0.317$ of the time, the Normal variate lies beyond $[-1,1]$ and is immediately rejected. ($\Phi$ is the standard Normal CDF.) In the remaining fraction of the time, the binary rejection procedure has to be followed, requiring two more Normal variates on average. 
The overall procedure requires an average of $1/(2\exp(-1/2)/\sqrt{2\pi}) \approx 2.07$ steps. The expected number of Normal variates needed to produce each uniform result works out to $$\sqrt{2 e \pi}\left(1-2\Phi(-1)\right) \approx 2.82137.$$ Although that is pretty efficient, note that (1) computation of the Normal PDF requires computing an exponential and (2) the value $\Phi(-1)$ must be precomputed once and for all. It's still a little less calculation than the Box-Mueller method ( q.v. ). The order statistics of a uniform distribution have exponential gaps. Since the sum of squares of two Normals (of zero mean) is exponential, we may generate a realization of $n$ independent uniforms by summing the squares of pairs of such Normals, computing the cumulative sum of these, rescaling the results to fall in the interval $[0,1]$, and dropping the last one (which will always equal $1$). This is a pleasing approach because it requires only squaring, summing, and (at the end) a single division. The $n$ values will automatically be in ascending order. If such a sorting is desired, this method is computationally superior to all the others insofar as it avoids the $O(n\log(n))$ cost of a sort. If a sequence of independent uniforms is needed, however, then sorting these $n$ values randomly will do the trick. Since (as seen in the Box-Mueller method, q.v. ) the ratios of each pair of Normals are independent of the sum of squares of each pair, we already have the means to obtain that random permutation: order the cumulative sums by the corresponding ratios. (If $n$ is very large, this process could be carried out in smaller groups of $k$ with little loss of efficiency, since each group needs only $2(k+1)$ Normals to create $k$ uniform values. For fixed $k$, the asymptotic computational cost is therefore $O(n\log(k))$ = $O(n)$, needing $2n(1+1/k)$ Normal variates to generate $n$ uniform values.) To a superb approximation, any Normal variate with a large standard deviation looks uniform over ranges of much smaller values. Upon rolling this distribution into the range $[0,1]$ (by taking only the fractional parts of the values), we thereby obtain a distribution that is uniform for all practical purposes. This is extremely efficient, requiring one of the simplest arithmetic operations of all: simply round each Normal variate down to the nearest integer and retain the excess. The simplicity of this approach becomes compelling when we examine a practical R implementation: rnorm(n, sd=10) %% 1 reliably produces n uniform values in the range $[0,1]$ at the cost of just n Normal variates and almost no computation. (Even when the standard deviation is $1$, the PDF of this approximation varies from a uniform PDF, as shown in the following figure, by less than one part in $10^8$! To detect it reliably would require a sample of $10^{16}$ values--that's already beyond the capability of any standard test of randomness. With a larger standard deviation the non-uniformity is so small it cannot even be calculated. For instance, with an SD of $10$ as shown in the code, the maximum deviation from a uniform PDF is only $10^{-857}$.) In every case Normal variables "with known parameters" can easily be recentered and rescaled into the Standard Normals assumed above. Afterwards, the resulting uniformly distributed values can be recentered and rescaled to cover any desired interval. These require only basic arithmetic operations. 
The ease of these constructions is evidenced by the following R code, which uses only one or two lines for most of them. Their correctness is witnessed by the resulting near-uniform histograms based on $100,000$ independent values in each case (requiring around 12 seconds for all seven simulations). For reference--in case you are worried about the amount of variation appearing in any of these plots--a histogram of uniform values simulated with R 's uniform random number generator is included at the end. All these simulations were tested for uniformity using a $\chi^2$ test based on $1000$ bins; none could be considered significantly non-uniform (the lowest p-value was $3\%$--for the results generated by R 's actual uniform number generator!). set.seed(17)
n <- 1e5
y <- matrix(rnorm(floor(n/2)*2), nrow=2)
x <- c(atan2(y[2,], y[1,])/(2*pi) + 1/2, exp(-(y[1,]^2+y[2,]^2)/2))
hist(x, main="Box-Mueller")
y <- apply(array(rnorm(4*n), c(2,2,n)), c(3,2), function(z) sum(z^2))
x <- y[,2] / (y[,1]+y[,2])
hist(x, main="Beta")
x <- apply(array(rnorm(8*n), c(4,2,n)), 3, function(y) cor(y[,1], y[,2]))
hist(x, main="Correlation")
n.bits <- 32; x <- (2^-(1:n.bits)) %*% matrix(rnorm(n*n.bits) > 0, n.bits)
hist(x, main="Binary")
y <- matrix(rnorm(n*3), 3)
x <- y[1, ] / sqrt(apply(y, 2, function(x) sum(x^2)))
hist(x, main="Equal area")
accept <- function(p) { # Using random normals, return TRUE with chance `p`
p.bit <- x <- 0
while(p.bit == x) {
p.bit <- p >= 1/2
x <- rnorm(1) >= 0
p <- (2*p) %% 1
}
return(x == 0)
}
y <- rnorm(ceiling(n * sqrt(exp(1)*pi/2))) # This aims to produce `n` uniforms
y <- y[abs(y) < 1]
x <- y[sapply(y, function(x) accept(exp((x^2-1)/2)))]
hist(x, main="Rejection")
y <- matrix(rnorm(2*(n+1))^2, 2)
x <- cumsum(y)[seq(2, 2*(n+1), 2)]
x <- x[-(n+1)] / x[n+1]
x <- x[order(y[2,-(n+1)]/y[1,-(n+1)])]
hist(x, main="Ordered")
x <- rnorm(n) %% 1 # (Use SD of 5 or greater in practice)
hist(x, main="Modular")
x <- runif(n) # Reference distribution
hist(x, main="Uniform") | {
"source": [
"https://stats.stackexchange.com/questions/117689",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/49545/"
]
} |
117,919 | As far as I know, when adopting Stochastic Gradient Descent as the learning algorithm,
some people use 'epoch' for the full dataset and 'batch' for the data used in a single update step, while others use 'batch' and 'minibatch' respectively, and still others use 'epoch' and 'minibatch'. This brings much confusion when discussing. So what is the correct usage? Or are they just dialects which are all acceptable? | Epoch means one pass over the full training set. Batch means that you use all your data to compute the gradient during one iteration. Mini-batch means you only take a subset of all your data during one iteration. | {
"source": [
"https://stats.stackexchange.com/questions/117919",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/56984/"
]
} |
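A minimal R sketch of the bookkeeping implied by these terms (illustrative only; the sizes are arbitrary and the gradient update itself is left as a placeholder comment):
n <- 1000; batch_size <- 100; n_epochs <- 5
set.seed(1)
for (epoch in seq_len(n_epochs)) {                 # one epoch = one full pass over the data
  idx <- sample(n)                                 # reshuffle each epoch
  for (start in seq(1, n, by = batch_size)) {      # one parameter update per mini-batch
    batch <- idx[start:min(start + batch_size - 1, n)]
    # compute the gradient on `batch` and update the parameters here
  }
}
# 5 epochs x (1000 / 100) mini-batches per epoch = 50 updates in total;
# setting batch_size <- n would correspond to (full) "batch" gradient descent.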
117,922 | I have a data set where the response variable Y is a rate between 0 and 1, and the histogram of Y is bimodal, so I feel that linear regression is not suitable. I have been reading papers about inflated beta regression. My question is: just as in OLS we check the plot of Y at a given level of X to see if it is normal, do I need to ensure that the conditional plots of Y given X for each of my predictors are also bimodal? Also, I have read that R-square isn't applicable for GLMs such as beta regression, and instead I need to use deviance residuals. If I want to ensure that the model fits well (all variance is explained), should I be looking for a deviance residual plot that is normally distributed? A preliminary run in SAS showed strong heteroskedasticity in my model's residuals (the raw residuals, i.e., Y - Yhat); however, I want to confirm that this is okay, since in non-linear regression the variance depends on the mean (which is specifically okay for beta regression)? | | {
"source": [
"https://stats.stackexchange.com/questions/117922",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/56986/"
]
} |
117,935 | Let $$(X,Y)\sim N\left(\begin{pmatrix}0\\0\end{pmatrix},\begin{pmatrix}1&\rho\\\rho&1\end{pmatrix}\right)$$ and suppose we observe $(|X_1|,|Y_1|),\dots,(|X_n|,|Y_n|)$ independently. I wish to estimate $\rho$ given these observations. Is it possible? I worry I can't because what we observe are only the absolute values. | | {
"source": [
"https://stats.stackexchange.com/questions/117935",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/40104/"
]
} |
118,033 | Has any study been done on the best set of colors to use for showing multiple series on the same plot? I've just been using the defaults in matplotlib, and they look a little childish since they're all bright, primary colors. | A common reference for choosing a color palette is the work of Cynthia Brewer on ColorBrewer. The colors were chosen based on perceptual patterns in choropleth maps, but most of the same advice applies to using color in any type of plot to distinguish data patterns. If color is solely to distinguish between the different lines, then a qualitative palette is in order. Often color is not needed in line plots with only a few lines, and different point symbols and/or dash patterns are effective enough. A more common problem with line plots is that if the lines frequently overlap it will be difficult to distinguish different patterns no matter what symbols or colors you use. Stephen Kosslyn recommends a general rule of thumb of having only 4 lines in a plot; if you have more, consider splitting the lines into a series of small multiple plots. Here is an example showing the recommendation (figure omitted): no color is needed and the labels are more than sufficient. | {
"source": [
"https://stats.stackexchange.com/questions/118033",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1120/"
]
} |
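A short R sketch of the advice above, assuming the RColorBrewer package is available; the toy data and the choice of the "Dark2" palette are illustrative only:
library(RColorBrewer)
set.seed(1)
cols <- brewer.pal(4, "Dark2")                            # qualitative palette, at most ~4 lines per panel
series <- apply(matrix(rnorm(400), 100, 4), 2, cumsum)    # toy data: 4 random walks
matplot(series, type = "l", lty = 1, lwd = 2, col = cols)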
118,199 | Sparse coding is defined as learning an over-complete set of basis vectors to represent input vectors (<-- why do we want this) . What are the differences between sparse coding and autoencoder? When will we use sparse coding and autoencoder? | Finding the differences can be done by looking at the models. Let's look at sparse coding first. Sparse coding Sparse coding minimizes the objective
$$
\mathcal{L}_{\text{sc}} = \underbrace{||WH - X||_2^2}_{\text{reconstruction term}} + \underbrace{\lambda ||H||_1}_{\text{sparsity term}}
$$
where $W$ is a matrix of bases, $H$ is a matrix of codes and $X$ is a matrix of the data we wish to represent. $\lambda$ implements a trade-off between sparsity and reconstruction. Note that if we are given $H$, estimation of $W$ is easy via least squares. In the beginning, we do not have $H$, however. Yet, many algorithms exist that can solve the objective above with respect to $H$. Actually, this is how we do inference: we need to solve an optimisation problem if we want to know the $h$ belonging to an unseen $x$. Auto encoders Auto encoders are a family of unsupervised neural networks. There are quite a lot of them, e.g. deep auto encoders or those having different regularisation tricks attached--e.g. denoising, contractive, sparse. There even exist probabilistic ones, such as generative stochastic networks or the variational auto encoder. Their most abstract form is
$$
D(d(e(x;\theta^r); \theta^d), x)
$$
but we will go along with a much simpler one for now:
$$
\mathcal{L}_{\text{ae}} = ||W\sigma(W^TX) - X||^2
$$
where $\sigma$ is a nonlinear function such as the logistic sigmoid $\sigma(x) = {1 \over 1 + \exp(-x)}$. Similarities Note that $\mathcal{L}_{sc}$ looks almost like $\mathcal{L}_{ae}$ once we set $H = \sigma(W^TX)$. The differences between the two are that (i) auto encoders do not encourage sparsity in their general form, and (ii) an autoencoder uses a model for finding the codes, while sparse coding does so by means of optimisation. For natural image data, regularized auto encoders and sparse coding tend to yield very similar $W$. However, auto encoders are much more efficient and are easily generalized to much more complicated models. E.g. the decoder can be highly nonlinear, e.g. a deep neural network. Furthermore, one is not tied to the squared loss (on which the estimation of $W$ for $\mathcal{L}_{sc}$ depends). Also, the different methods of regularisation yield representations with different characteristics. Denoising auto encoders have also been shown to be equivalent to a certain form of RBMs, etc. But why? If you want to solve a prediction problem, you will not need auto encoders unless you have only a little labeled data and a lot of unlabeled data. Then you will generally be better off training a deep auto encoder and putting a linear SVM on top instead of training a deep neural net. However, they are very powerful models for capturing characteristics of distributions. This is vague, but research turning this into hard statistical facts is currently being conducted. Deep latent Gaussian models, aka variational auto encoders or generative stochastic networks, are pretty interesting ways of obtaining auto encoders which provably estimate the underlying data distribution. | {
"source": [
"https://stats.stackexchange.com/questions/118199",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/41749/"
]
} |
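To make the structural contrast in the answer above concrete, here is a toy R sketch (not a training routine; the dimensions, the random W and H, and the value of lambda are made up) that simply evaluates the two objectives:
set.seed(2)
p <- 16; k <- 32; n <- 100                      # over-complete: k > p
X <- matrix(rnorm(p * n), p, n)
W <- matrix(rnorm(p * k), p, k)
H <- matrix(rnorm(k * n), k, n)                 # sparse coding treats the codes H as free variables
lambda <- 0.1
sigmoid <- function(z) 1 / (1 + exp(-z))
L_sc <- sum((W %*% H - X)^2) + lambda * sum(abs(H))    # reconstruction term + L1 sparsity term
L_ae <- sum((W %*% sigmoid(t(W) %*% X) - X)^2)         # the code is a deterministic function of X
c(sparse_coding = L_sc, autoencoder = L_ae)
# Inference: sparse coding must optimise over H for each new x (a lasso-type solve);
# the autoencoder only needs the forward pass sigmoid(t(W) %*% x).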
118,215 | This is my first post on StackExchange, but I have been using it as a resource for quite a while, I will do my best to use the appropriate format and make the appropriate edits. Also, this is a multi-part question. I wasn't sure if I should split the question into several different posts or just one. Since the questions are all from one section in the same text I thought it would be more relevant to post as one question. I am researching habitat use of a large mammal species for a Master's Thesis. The goal of this project is to provide forest managers (who are most likely not statisticians) with a practical framework to assess the quality of habitat on the lands they manage in regard to this species. This animal is relatively elusive, a habitat specialist, and usually located in remote areas. Relatively few studies have been carried out regarding the distribution of the species, especially seasonally. Several animals were fitted with GPS collars for a period of one year. One hundred locations (50 summer and 50 winter) were randomly selected from each animal's GPS collar data. In addition, 50 points were randomly generated within each animal's home range to serve as "available" or "pseudo-absence" locations. The locations from the GPS collars are coded a 1 and the randomly selected available locations are coded as 0. For each location, several habitat variables were sampled in the field (tree diameters, horizontal cover, coarse woody debris, etc) and several were sampled remotely through GIS (elevation, distance to road, ruggedness, etc). The variables are mostly continuous except for 1 categorical variable that has 7 levels. My goal is to use regression modelling to build resource selection functions (RSF) to model the relative probability of use of resource units. I would like to build a seasonal (winter and summer) RSF for the population of animals (design type I) as well as each individual animal (design type III). I am using R to perform the statistical analysis. The primary text I have been using is… "Hosmer, D. W., Lemeshow, S., & Sturdivant, R. X. 2013. Applied Logistic Regression. Wiley, Chicester". The majority of the examples in Hosmer et al. use STATA, I have also been using the following 2 texts for reference with R . "Crawley, M. J. 2005. Statistics : an introduction using R. J. Wiley,
Chichester, West Sussex, England." "Plant, R. E. 2012. Spatial Data Analysis in Ecology and Agriculture
Using R. CRC Press, London, GBR." I am currently following the steps in Chapter 4 of Hosmer et al. for the "Purposeful Selection of Covariates" and have a few questions about the process. I have outlined the first few steps in the text below to aid in my questions. Step 1: A univariable analysis of each independent variable (I used a
univariable logistic regression). Any variable whose univariable test
has a p-value of less than 0.25 should be included in the first
multivariable model. Step 2: Fit a multivariable model containing all covariates
identified for inclusion at step 1 and to assess the importance of
each covariate using the p-value of its Wald statistic. Variables
that do not contribute at traditional levels of significance should
be eliminated and a new model fit. The newer, smaller model should be
compared to the old, larger model using the partial likelihood ratio
test. Step 3: Compare the values of the estimated coefficients in the
smaller model to their respective values from the large model. Any
variable whose coefficient has changed markedly in magnitude should
be added back into the model as it is important in the sense of
providing a needed adjustment of the effect of the variables that
remain in the model. Cycle through steps 2 and 3 until it appears that all of the important variables are included in the model and those excluded are clinically and/or statistically unimportant. Hosmer et al. use the " delta-beta-hat-percent "
as a measure of the change in magnitude of the coefficients. They
suggest a significant change as a delta-beta-hat-percent of >20%. Hosmer et al. define the delta-beta-hat-percent as
$\Delta\hat{\beta}\%=100\frac{\hat{\theta}_{1}-\hat{\beta}_{1}}{\hat{\beta}_{1}}$.
Where $\hat{\theta}_{1}$ is the coefficient from the smaller model and $\hat{\beta}_{1}$ is the coefficient from the larger model. Step 4: Add each variable not selected in Step 1 to the model
obtained at the end of step 3, one at a time, and check its
significance either by the Wald statistic p-value or the partial
likelihood ratio test if it is a categorical variable with more than
2 levels. This step is vital for identifying variables that, by
themselves, are not significantly related to the outcome but make an
important contribution in the presence of other variables. We refer
to the model at the end of Step 4 as the preliminary main effects
model . Steps 5-7: I have not progressed to this point so I will leave these
steps out for now, or save them for a different question. My questions: In step 2, what would be appropriate as a traditional level of
significance, a p-value of <0.05 something larger like <.25? In step 2 again, I want to make sure the R code I have been using for the partial likelihood test is correct and I want to make sure I am interpreting the results correctly. Here is what I have been doing… anova(smallmodel,largemodel,test='Chisq') If the p-value is significant (<0.05) I add the variable back to the model, if it is insignificant I proceed with deletion? In step 3, I have a question regarding the delta-beta-hat-percent and when it is appropriate to add an excluded variable back to the model. For example, I exclude one variable from the model and it changes the $\Delta\hat{\beta}\%$ for a different variable by >20%. However, the variable with the >20% change in $\Delta\hat{\beta}\%$ seems to be insignificant and looks as if it will be excluded from the model in the next few cycles of Steps 2 and 3. How can I make a determination if both variables should be included or excluded from the model? Because I am proceeding by excluding 1 variable at a time by deleting the least significant variables first, I am hesitant to exclude a variable out of order. Finally, I want to make sure the code I am using to calculate $\Delta\hat{\beta}\%$ is correct. I have been using the following code. If there is a package that will do this for me or a more simple way of doing it I am open to suggestions. 100*((smallmodel$coef[2]-largemodel$coef[2])/largemodel$coef[2]) | None of those proposed methods have been shown by simulation studies to work. Spend your efforts formulating a complete model and then fit it. Univariate screening is a terrible approach to model formulation, and the other components of stepwise variable selection you hope to use should likewise be avoided. This has been discussed at length on this site. What gave you the idea in the first place that variables should sometimes be removed from models because they are not "significant"? Don't use $P$-values or changes in $\beta$ to guide any of the model specification. | {
"source": [
"https://stats.stackexchange.com/questions/118215",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/57117/"
]
} |
118,497 | I am trying to understand the Coefficient of Variation . When I try to apply it to the following two samples of data I am unable to understand how to interpret the results. Let's say sample 1 is ${0, 5, 7, 12, 11, 17}$
and sample 2 is ${10 ,15 ,17 ,22 ,21 ,27}$. Here sample 2 $=$ sample 1 $+\ 10$ as you can see. Both have the same standard deviation $\sigma_{2} = \sigma_{1}= 5.95539$ but $\mu_{2}=18.67$ and $\mu_{1}=8.66667$. Now the coefficient of variation ${\sigma}/{\mu}$ will be different. For sample 2 it will be less than for sample 1. But how do I interpret that result? In terms of variance both are the same; only their means are different. So what's the use of the coefficient of variation here? It's just misleading me, or maybe I am unable to interpret the results. | In examples like yours when data differ just additively, i.e. we add some constant $k$ to everything, then as you point out the standard deviation is unchanged, the mean is changed by exactly that constant, and so the coefficient of variation changes from $\sigma / \mu$ to $\sigma / (\mu + k)$ , which is neither interesting nor useful. It's multiplicative change that's interesting and where the coefficient of variation has some use. For multiplying everything by some constant $k$ implies that the coefficient of variation becomes $k \sigma/k \mu$ , i.e. remains the same as before. Changing of units of measurement is a case in point, as in the answers of @Aksalal and @Macond. As the coefficient of variation is unit-free, so also it is dimension-free, as whatever units or dimensions are possessed by the underlying variable are washed out by the division. That makes the coefficient of variation a measure of relative variability , so the relative variability of lengths may be compared with that of weights, and so forth. One field where the coefficient of variation has found some descriptive use is the morphometrics of organism size in biology. In principle and practice the coefficient of variation is only defined fully and at all useful for variables that are entirely positive. Hence in detail your first sample with a value of $0$ is not an appropriate example. Another way of seeing this is to note that were the mean ever zero the coefficient would be indeterminate and were the mean ever negative the coefficient would be negative, assuming in the latter case that the standard deviation is positive. Either case would make the measure useless as a measure of relative variability, or indeed for any other purpose. An equivalent statement is that the coefficient of variation is interesting and useful only if logarithms are defined in the usual way for all values, and indeed using coefficients of variation is equivalent to looking at variability of logarithms. Although it should seem incredible to readers here, I have seen climatological and geographical publications in which the coefficients of variation of Celsius temperatures have puzzled naive scientists who note that coefficients can explode as mean temperatures get close to $0^\circ$ C and become negative for mean temperatures below freezing. Even more bizarrely, I have seen suggestions that the problem is solved by using Fahrenheit instead. Conversely, the coefficient of variation is often mentioned correctly as a summary measure defined if and only if measurement scales qualify as ratio scale. As it happens, the coefficient of variation is not especially useful even for temperatures measured in kelvin, but for physical reasons rather than mathematical or statistical. As in the case of the bizarre examples from climatology, which I leave unreferenced as the authors deserve neither the credit nor the shame, the coefficient of variation has been over-used in some fields. 
There is occasionally a tendency to regard it as a kind of magic summary measure that encapsulates both mean and standard deviation. This is naturally primitive thinking, as even when the ratio makes sense, the mean and standard deviation cannot be recovered from it. In statistics the coefficient of variation is a fairly natural parameter if variation follows either the gamma or the lognormal, as may be seen by looking at the form of the coefficient of variation for those distributions. Although the coefficient of variation can be of some use, in cases where it applies the more useful step is to work on logarithmic scale, either by logarithmic transformation or by using a logarithmic link function in a generalized linear model. EDIT: If all values are negative, then we can regard the sign as just a convention that can be ignored. Equivalently in that case, $\sigma / |\mu|$ is effectively an identical twin of coefficient of variation. EDIT 25 May 2020: Good detailed discussion in Simpson, G.G., Roe, A. and Lewontin, R.C. 1960. Quantitative Zoology. New York: Harcourt, Brace, pp.89-94. This text is inevitably dated in several respects, but includes many lucid explanations and pugnacious comments and criticisms. See also Lewontin, R.C. 1966. On the measurement of relative variability. Systematic Biology 15: 141–142. https://doi.org/10.2307/sysbio/15.2.141 | {
"source": [
"https://stats.stackexchange.com/questions/118497",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/55227/"
]
} |
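A quick R illustration of the additive-versus-multiplicative point made in the answer above, using a strictly positive toy vector (the answer explains why a sample containing 0 is not an appropriate example):
x  <- c(5, 7, 11, 12, 17)        # strictly positive toy data
cv <- function(x) sd(x) / mean(x)
cv(x); cv(x + 10); cv(10 * x)    # adding a constant changes the CV; rescaling does not
sd(log(x)); sd(log(10 * x))      # the SD on the log scale is likewise unchanged by rescaling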
118,500 | In inference we use the terms undirected graphical models and directed graphical models . Why do we say factor graph instead of factor graphical models ? | | {
"source": [
"https://stats.stackexchange.com/questions/118500",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/12359/"
]
} |
118,712 | I understand that the ridge regression estimate is the $\beta$ that minimizes the residual sum of squares plus a penalty on the size of $\beta$ $$\beta_\mathrm{ridge} = (\lambda I_D + X'X)^{-1}X'y = \operatorname{argmin}\big[ \text{RSS} + \lambda \|\beta\|^2_2\big]$$ However, I don't fully understand the significance of the fact that $\beta_\text{ridge}$ differs from $\beta_\text{OLS}$ by only adding a small constant to the diagonal of $X'X$. Indeed, $$\beta_\text{OLS} = (X'X)^{-1}X'y$$ My book mentions that this makes the estimate more stable numerically -- why? Is numerical stability related to the shrinkage towards 0 of the ridge estimate, or is it just a coincidence? | In an unpenalized regression, you can often get a ridge* in parameter space, where many different values along the ridge all do as well or nearly as well on the least squares criterion. * (at least, it's a ridge in the likelihood function -- they're actually valleys in the RSS criterion, but I'll continue to call it a ridge, as this seems to be conventional -- or even, as Alexis points out in comments, I could call that a thalweg, being the valley's counterpart of a ridge) In the presence of a ridge in the least squares criterion in parameter space, the penalty you get with ridge regression gets rid of those ridges by pushing the criterion up as the parameters head away from the origin (figures omitted; a clearer image is linked in the original answer). In the first plot, a large change in parameter values (along the ridge) produces a minuscule change in the RSS criterion. This can cause numerical instability; it's very sensitive to small changes (e.g. a tiny change in a data value, even truncation or rounding error). The parameter estimates are almost perfectly correlated. You may get parameter estimates that are very large in magnitude. By contrast, by lifting up the thing that ridge regression minimizes (by adding the $L_2$ penalty) when the parameters are far from 0, small changes in conditions (such as a little rounding or truncation error) can't produce gigantic changes in the resulting estimates. The penalty term results in shrinkage toward 0 (resulting in some bias). A small amount of bias can buy a substantial improvement in the variance (by eliminating that ridge). The uncertainty of the estimates is reduced (the standard errors are inversely related to the second derivative, which is made larger by the penalty). Correlation between parameter estimates is reduced. You now won't get parameter estimates that are very large in magnitude if the RSS for small parameters would not be much worse. | {
"source": [
"https://stats.stackexchange.com/questions/118712",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/20148/"
]
} |
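A small R sketch of the stabilising effect described in the answer above; the data are made up, lambda is arbitrary, and in practice one would standardize the predictors and usually leave the intercept unpenalized:
set.seed(3)
n <- 50
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.01)                 # nearly collinear columns -> a long flat valley in RSS
y  <- x1 + x2 + rnorm(n)
X  <- cbind(1, x1, x2)
lambda <- 1
beta_ols   <- solve(crossprod(X), crossprod(X, y))
beta_ridge <- solve(crossprod(X) + lambda * diag(ncol(X)), crossprod(X, y))
cbind(OLS = as.vector(beta_ols), ridge = as.vector(beta_ridge))  # the two OLS slopes are poorly determined individually; ridge pins them near 1 each
kappa(crossprod(X))                            # enormous condition number without the penalty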
120,030 | I understand the concept that $\hat\beta_0$ is the mean for when the categorical variable is equal to 0 (or is the reference group), giving the end interpretation that the regression coefficient is the difference in mean of the two categories. Even with >2 categories I would assume each $\hat\beta$ explains the difference between that category's mean and the reference. But, what if more variables are brought into the multivariable model? Now what does the intercept mean given that it doesn't make sense for it to be the mean for the reference of two categorical variables? An example would be if gender (M(ref)/F) and race (white(ref)/black) were both in a model. Is the $\hat\beta_0$ the mean for only white males? How does one interpret any other possibilities? As a separate note: do contrast statements serve as a way to method for investigating effect modification? Or just to see the effect ($\hat\beta$) at different levels? | You are right about the interpretation of the betas when there is a single categorical variable with $k$ levels. If there were multiple categorical variables (and there were no interaction term), the intercept ( $\hat\beta_0$ ) is the mean of the group that constitutes the reference level for both (all) categorical variables. Using your example scenario, consider the case where there is no interaction, then the betas are: $\hat\beta_0$ : the mean of white males $\hat\beta_{\rm Female}$ : the difference between the mean of females and the mean of males $\hat\beta_{\rm Black}$ : the difference between the mean of blacks and the mean of whites We can also think of this in terms of how to calculate the various group means: \begin{align}
&\bar x_{\rm White\ Males}& &= \hat\beta_0 \\
&\bar x_{\rm White\ Females}& &= \hat\beta_0 + \hat\beta_{\rm Female} \\
&\bar x_{\rm Black\ Males}& &= \hat\beta_0 + \hat\beta_{\rm Black} \\
&\bar x_{\rm Black\ Females}& &= \hat\beta_0 + \hat\beta_{\rm Female} + \hat\beta_{\rm Black}
\end{align} If you had an interaction term, it would be added at the end of the equation for black females. (The interpretation of such an interaction term is quite convoluted, but I walk through it here: Interpretation of interaction term .) Update : To clarify my points, let's consider a canned example, coded in R . d = data.frame(Sex =factor(rep(c("Male","Female"),times=2), levels=c("Male","Female")),
Race =factor(rep(c("White","Black"),each=2), levels=c("White","Black")),
y =c(1, 3, 5, 7))
d
# Sex Race y
# 1 Male White 1
# 2 Female White 3
# 3 Male Black 5
# 4 Female Black 7 The means of y for these categorical variables are: aggregate(y~Sex, d, mean)
# Sex y
# 1 Male 3
# 2 Female 5
## i.e., the difference is 2
aggregate(y~Race, d, mean)
# Race y
# 1 White 2
# 2 Black 6
## i.e., the difference is 4 We can compare the differences between these means to the coefficients from a fitted model: summary(lm(y~Sex+Race, d))
# ...
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 1 3.85e-16 2.60e+15 2.4e-16 ***
# SexFemale 2 4.44e-16 4.50e+15 < 2e-16 ***
# RaceBlack 4 4.44e-16 9.01e+15 < 2e-16 ***
# ...
# Warning message:
# In summary.lm(lm(y ~ Sex + Race, d)) :
# essentially perfect fit: summary may be unreliable The thing to recognize about this situation is that, without an interaction term, we are assuming parallel lines. Thus, the Estimate for the (Intercept) is the mean of white males. The Estimate for SexFemale is the difference between the mean of females and the mean of males. The Estimate for RaceBlack is the difference between the mean of blacks and the mean of whites. Again, because a model without an interaction term assumes that the effects are strictly additive (the lines are strictly parallel), the mean of black females is then the mean of white males plus the difference between the mean of females and the mean of males plus the difference between the mean of blacks and the mean of whites. | {
"source": [
"https://stats.stackexchange.com/questions/120030",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/58623/"
]
} |
120,080 | Both PCA and autoencoders can do dimension reduction, so what are the differences between them? In what situations should I use one over the other? | PCA is restricted to a linear map, while auto encoders can have nonlinear encoder/decoders. A single layer auto encoder with a linear transfer function is nearly equivalent to PCA, where "nearly" means that the $W$ found by the AE and by PCA won't necessarily be the same - but the subspace spanned by the respective $W$'s will. | {
"source": [
"https://stats.stackexchange.com/questions/120080",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/41749/"
]
} |
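As a complement to the answer above, PCA itself can be viewed as a linear encoder/decoder pair, which is what a linear single-layer auto encoder approximates; a short R sketch using the built-in iris data purely for illustration:
X <- scale(as.matrix(iris[, 1:4]), scale = FALSE)   # centred data
W <- prcomp(X)$rotation[, 1:2]                      # linear "encoder" weights (first 2 PCs)
Z <- X %*% W                                        # codes (scores)
Xhat <- Z %*% t(W)                                  # linear "decoder": reconstruction in the PC subspace
mean((X - Xhat)^2)                                  # reconstruction error a linear AE would also try to minimise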
120,089 | I have seen the term "permutation invariant" used for a version of the MNIST digit recognition task. What does it mean? | In this context it refers to the fact that the model does not assume any spatial relationships between the features. E.g. for a multilayer perceptron, you can permute the pixels and the performance would be the same. This is not the case for convolutional networks, which assume neighbourhood relations. | {
"source": [
"https://stats.stackexchange.com/questions/120089",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/41749/"
]
} |
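A tiny R sketch of what the permuted version of the task looks like (fake data, purely illustrative):
set.seed(4)
X <- matrix(runif(5 * 784), nrow = 5)   # 5 fake 28x28 images, flattened to length-784 rows
perm <- sample(784)                     # one fixed permutation, reused for every image
X_perm <- X[, perm]
# An MLP finds X and X_perm equally learnable (its input units are just relabelled);
# a convolutional net is hurt because the pixel neighbourhood structure is destroyed.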
120,097 | I am analyzing multiply-imputed complex sample survey data using Stata. For normally distributed numerical variables I want to report the mean and standard deviation. However, the Stata command for estimating the mean of multiply-imputed survey data, mi estimate: svy: mean [varlist], gives the standard error of the mean, not the standard deviation. I tried to search for help using Google, but in vain. My question is this: under such circumstances, is it possible to obtain an unbiased estimate of the standard deviation using the formula $\sigma = SE \cdot \sqrt{n}$? | | {
"source": [
"https://stats.stackexchange.com/questions/120097",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/52898/"
]
} |
120,179 | Given a covariance matrix $\boldsymbol \Sigma_s$, how to generate data such that it would have the sample covariance matrix $\hat{\boldsymbol \Sigma} = \boldsymbol \Sigma_s$? More generally: we are often interested in generating data from a density $ f(x \vert \boldsymbol\theta) $, with data $x$ given some parameter vector $\boldsymbol\theta$. This results in a sample, from which we may then again estimate a value $\boldsymbol{\hat\theta}$. What I am interested in is the reverse problem: What if we are given a set of parameters $\boldsymbol\theta_{s}$, and we would like to generate a sample $x$ such, that $ \boldsymbol{\hat\theta} = \boldsymbol\theta_{s}$. Is this a known problem? Is such a method useful? Are algorithms available? | There are two different typical situations for these kind of problems: i) you want to generate a sample from a given distribution whose population characteristics match the ones specified (but due to sampling variation, you don't have the sample characteristics exactly matching). ii) you want to generate a sample whose sample characteristics match the ones specified (but, due to the constraints of exactly matching sample quantities to a prespecified set of values, don't really come from the distribution you want). You want the second case -- but you get it by following the same approach as the first case, with an extra standardization step. So for multivariate normals, either can be done in a fairly straightforward manner: With first case you could use random normals without the population structure (such as iid standard normal which have expectation 0 and identity covariance matrix) and then impose it - transform to get the covariance matrix and mean you want. If $\mu$ and $\Sigma$ are the population mean and covariance you need and $z$ are iid standard normal, you calculate $y=Lz+\mu$, for some $L$ where $LL'=\Sigma$ (e.g. a suitable $L$ could be obtained via Cholesky decomposition). Then $y$ has the desired population characteristics. With the second, you have to first transform your random normals to remove even the random variation away from the zero mean and identity covariance (making the sample mean zero and sample covariance $I_n$), then proceed as before. But that initial step of removing the sample deviation from exact mean $0$, variance $I$ interferes with the distribution. (In small samples it can be quite severe.) This can be done by subtracting the sample mean of $z$ ($z^*=z-\bar z$) and calculating the Cholesky decomposition of $z^*$. If $L^*$ is the left Cholesky factor, then $z^{(0)}=(L^*)^{-1}z^*$ should have sample mean 0 and identity sample covariance. You can then calculate $y=Lz^{(0)}+\mu$ and have a sample with the desired sample moments. (Depending on how your sample quantities are defined, there may be an extra small fiddle involved with multiplying/dividing by factors like $\sqrt{\frac{n-1}{n}}$, but it's easy enough to identify that need.) | {
"source": [
"https://stats.stackexchange.com/questions/120179",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/55410/"
]
} |
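Here is a minimal R sketch of case (ii) from the answer above (matching the sample moments exactly); the target mean and covariance are made up, and note that cov() uses the n - 1 denominator, which is what the standardization step matches:
set.seed(5)
n <- 100
mu    <- c(1, 2, 3)
Sigma <- matrix(c(2, .5, .3,
                  .5, 1, .2,
                  .3, .2, 1.5), 3, 3)
z  <- matrix(rnorm(n * 3), n, 3)
zc <- scale(z, center = TRUE, scale = FALSE)       # sample mean is now exactly 0
S  <- cov(zc)
z0 <- zc %*% solve(chol(S))                        # sample covariance is now exactly the identity
y  <- z0 %*% chol(Sigma) + matrix(mu, n, 3, byrow = TRUE)
colMeans(y)                                        # equals mu up to rounding
cov(y)                                             # equals Sigma up to rounding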
120,527 | I was attempting to simulate injection of random points within a circle, such that any part of the circle has the same probability of having a defect. I expected the count per area of the resulting distribution to follow a Poisson distribution if I break up the circle into equal area rectangles. Since it requires only placing points within a circular area, I injected two uniform random distributions in polar coordinates: $R$ (radius) and $\theta$ (polar angle). But after doing this injection, I clearly get more points in the center of the circle compared to the edge. What would be the correct way to perform this injection across the circle such that the points are randomly distributed across the circle? | You want the proportion of points to be uniformly proportional to area rather than distance to the origin. Since area is proportional to the squared distance, generate uniform random areas and take their square roots; scale the results as desired. Combine that with a uniform polar angle. This is quick and simple to code, efficient in execution (especially on a parallel platform), and generates exactly the prescribed number of points. Example This is working R code to illustrate the algorithm. n <- 1e4
rho <- sqrt(runif(n))
theta <- runif(n, 0, 2*pi)
x <- rho * cos(theta)
y <- rho * sin(theta)
plot(x, y, pch=19, cex=0.6, col="#00000020") | {
"source": [
"https://stats.stackexchange.com/questions/120527",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/56528/"
]
} |
120,548 | I have a treatment and a control group: both have pre- and post- values (I tested reading speed). I need to know if there are any significant differences between the groups (or if the treatment group has improved compared with the control group). Now I want to apply the Mann-Whitney test but I am not sure if I have to analyse dependent variable: pre-test values
Group variable: treatment + control group variable
dependent variable: post-test values
Group variable: treatment + control group variable or if I just can calculate the differences dependent variable: post-test-pre-test values
Group variable: treatment + control group variable Could anyone give me advice here? | | {
"source": [
"https://stats.stackexchange.com/questions/120548",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/55532/"
]
} |
120,576 | I have a distribution over the finite set $\mathcal{A}$ where the probability mass function $p$ is: $$p(a) = \mathbb{P}(A=a) \quad \quad \quad \quad \quad \quad \text{for all } a \in \mathcal{A}.$$ Given observed data $\mathbf{a} = (a_1,...,a_n)$ the empirical mass function $q_\mathbf{a}$ is defined as: $$q_\mathbf{a}(a) = \frac{1}{n} \sum_{i=1}^n \mathbb{I}(a_i = a) \quad \quad \quad \text{for all } a \in \mathcal{A}.$$ Now, for a random sample $\mathbf{A} = (A_1,...,A_n) \sim \text{IID } p$ , I want to bound from above and below the expectation of the rectilinear distance between the true mass function and the empirical mass function, denoted here by: $$\phi_n \equiv \mathbb{E} \Big[ \|p-q_\mathbf{A}\|_1 \Big].$$ I would think that this is something well known, but I just can't seem to find a good reference. I tried using the DKW inequality and then applying Markov's inequality, but was unable to get anything from that. I also tried using Pinsker's inequality, but I couldn't bound the KL divergence. | | {
"source": [
"https://stats.stackexchange.com/questions/120576",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/56189/"
]
} |
120,776 | In this blog post by Andrew Gelman, there is the following passage: The Bayesian models of 50 years ago seem hopelessly simple (except, of
course, for simple problems), and I expect the Bayesian models of
today will seem hopelessly simple, 50 years hence. (Just for a simple
example: we should probably be routinely using t instead of normal
errors just about everywhere, but we don't yet do so, out of
familiarity, habit, and mathematical convenience. These may be good
reasons–in science as in politics, conservatism has many good
arguments in its favor–but I think that ultimately as we become
comfortable with more complicated models, we’ll move in that
direction.) Why should we "routinely be using t instead of normal errors just about everywhere"? | Because, assuming normal errors is effectively the same as assuming that large errors do not occur! The normal distribution has such light tails that errors outside $\pm 3$ standard deviations have very low probability, and errors outside of $\pm 6$ standard deviations are effectively impossible. In practice, that assumption is seldom true. When analyzing small, tidy datasets from well designed experiments, this might not matter much, if we do a good analysis of residuals. With data of lesser quality, it might matter much more. When using likelihood-based (or bayesian) methods, the effect of this normality (as said above, effectively this is the "no large errors" assumption!) is to give the inference very little robustness. The results of the analysis are too heavily influenced by the large errors! This must be so, since assuming "no large errors" forces our methods to interpret the large errors as small errors, and that can only happen by moving the mean value parameter to make all the errors smaller. One way to avoid that is to use so-called "robust methods", see http://web.archive.org/web/20160611192739/http://www.stats.ox.ac.uk/pub/StatMeth/Robust.pdf But Andrew Gelman will not go for this, since robust methods are usually presented in a highly non-bayesian way. Using t-distributed errors in likelihood/bayesian models is a different way to obtain robust methods, as the $t$-distribution has heavier tails than the normal and so allows for a larger proportion of large errors. The number of degrees of freedom parameter should be fixed in advance, not estimated from the data, since such estimation will destroy the robustness properties of the method (*) (it is also a very difficult problem; the likelihood function for $\nu$, the number of degrees of freedom, can be unbounded, leading to very inefficient (even inconsistent) estimators). If, for instance, you think (are afraid) that as much as 1 in ten observations might be "large errors" (above 3 sd), then you could use a $t$-distribution with 2 degrees of freedom, increasing that number if the proportion of large errors is believed to be smaller. I should note that what I have said above is for models with independent $t$-distributed errors. There have also been proposals of the multivariate $t$-distribution (which is not independent) as error distribution. That proposal is heavily criticized in the paper "The emperor's new clothes: a critique of the multivariate $t$ regression model" by T. S. Breusch, J. C. Robertson and A. H. Welsh, in Statistica Neerlandica (1997) Vol. 51, nr. 3, pp. 269-286, where they show that the multivariate $t$ error distribution is empirically indistinguishable from the normal. But that criticism does not affect the independent $t$ model. (*) One reference stating this is Venables & Ripley's MASS---Modern Applied Statistics with S (on page 110 in 4th edition). | {
"source": [
"https://stats.stackexchange.com/questions/120776",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/27102/"
]
} |
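To illustrate the practical difference, a small R sketch (an entirely made-up toy example) comparing ordinary least squares with maximum likelihood under independent t errors whose degrees of freedom are fixed in advance, as the answer above recommends:
set.seed(6)
x <- 1:20
y <- 2 + 0.5 * x + rnorm(20)
y[20] <- y[20] + 15                                     # one gross error
ols <- coef(lm(y ~ x))
negll <- function(par) {                                # negative log-likelihood, t errors with df fixed at 2
  mu <- par[1] + par[2] * x
  -sum(dt((y - mu) / exp(par[3]), df = 2, log = TRUE) - par[3])
}
tfit <- optim(c(ols, 0), negll)$par[1:2]
rbind(OLS = ols, t_errors = tfit)                       # the t fit is much less affected by the outlier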
120,792 | I am working on model averaging of data collected about bird species and habitat vegetation. I have been using the MuMIn package in R and have taken a subset of all possible models and then averaged the variables in those models to create the coefficients from the subset of models but now I need to find the pseudo r-squared of this averaged model. Does anyone know how to accomplish that? | | {
"source": [
"https://stats.stackexchange.com/questions/120792",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/58963/"
]
} |
120,803 | If I understand things correctly, we want to look at $log(p/(1-p))$ since we want the independent variables to be able to take on any real value. But what if you know your independents can never be negative? Wouldn't it make more sense to simply look at $p/(1-p)$ in that case? (if so, how would the syntax for such a model look in R?) | {
"source": [
"https://stats.stackexchange.com/questions/120803",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/43059/"
]
} |
120,815 | I am developing an application related to pharmaceutical industry. Certain items are sold in significantly higher quantities during specific periods of the year. For example, here in my country, Mebendazole is more in demand during the school vacations. I am in the process of automatically predicting the reorder quantities of items for a pharmaceutical logistics application. I want to analyse the past data, for example, during the last five years and to know whether there is a significant increase or decrease in the demand during, for example, the next two weeks than the average. If it is significant, I want to calculate the percentage above or below the current years average to decide on reorder quantity. What are the statistical methods available to achieve these two tasks? Thanks in advance | {
"source": [
"https://stats.stackexchange.com/questions/120815",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/58975/"
]
} |
120,826 | I have a multivariate regression model $Y=X\beta ' + \epsilon$. The variables in the $X$ matrix have very different scales and hence the condition number of $X'X$ is huge (order of trillions). I would like to know if there are problems with parameter estimation due to the high condition number. On one hand, I suspect that if the number is high, the estimates of the $\beta$ are very unstable (because a small change in $X$ could have a large impact on the solution of $X'X\hat{\beta}=X'Y$). On the other hand, I do not think the stability of the solution shall change if I just change the units of the data matrix $X$, because the new estimates should just be multiples of the previous estimates. Could someone provide advice? Thanks. | {
"source": [
"https://stats.stackexchange.com/questions/120826",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/49819/"
]
} |
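The phenomenon this question asks about is easy to check numerically; the sketch below (illustrative simulated data only, not an answer to the inference question) shows that rescaling a badly scaled column changes the condition number of $X'X$ by many orders of magnitude while the least-squares fit is unchanged apart from the corresponding rescaling of that coefficient.

set.seed(1)
n  <- 100
x1 <- rnorm(n)                    # a predictor on an O(1) scale
x2 <- rnorm(n) * 1e6              # a predictor on a hugely different scale
y  <- 1 + 2 * x1 + 3e-6 * x2 + rnorm(n)

X  <- cbind(1, x1, x2)
Xs <- cbind(1, x1, x2 / 1e6)      # same information, rescaled column

kappa(crossprod(X))               # enormous condition number of X'X
kappa(crossprod(Xs))              # modest condition number after rescaling

coef(lm(y ~ x1 + x2))             # the x2 coefficient is 1e-6 times ...
coef(lm(y ~ x1 + I(x2 / 1e6)))    # ... the coefficient on the rescaled column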
121,162 | I know how to calculate PCA and SVD mathematically, and I know that both can be applied to Linear Least Squares regression. The main advantage of SVD mathematically seems to be that it can be applied to non-square matrices. Both focus on the decomposition of the $X^\top X$ matrix. Other than the advantage of SVD mentioned, are there any additional advantages or insights provided by using SVD over PCA? I'm really looking for the intuition rather than any mathematical differences. | As @ttnphns and @nick-cox said, SVD is a numerical method and PCA is an analysis approach (like least squares). You can do PCA using SVD, or you can do PCA by doing the eigen-decomposition of $X^T X$ (or $X X^T$), or you can do PCA using many other methods, just like you can solve least squares with a dozen different algorithms such as Newton's method, gradient descent, or SVD. So there is no "advantage" to SVD over PCA, because it's like asking whether Newton's method is better than least squares: the two aren't comparable. | {
"source": [
"https://stats.stackexchange.com/questions/121162",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/25973/"
]
} |
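A small numerical check of this point — SVD of the centered data and eigen-decomposition of the covariance matrix are just two routes to the same PCA — can be done in a few lines of R (illustrative data; the loading vectors may differ in sign between the two routes):

set.seed(1)
X  <- matrix(rnorm(100 * 3), ncol = 3)
Xc <- scale(X, center = TRUE, scale = FALSE)       # centered data matrix

eig <- eigen(cov(Xc))     # PCA via eigen-decomposition of the covariance matrix
sv  <- svd(Xc)            # PCA via SVD of the centered data (the route prcomp() takes)

max(abs(abs(eig$vectors) - abs(sv$v)))                     # loadings agree up to sign
cbind(eigen = eig$values, svd = sv$d^2 / (nrow(Xc) - 1))   # variances agree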
121,490 | Can anyone tell me how to interpret the 'residuals vs fitted', 'normal q-q', 'scale-location', and 'residuals vs leverage' plots? I am fitting a binomial GLM, saving it and then plotting it. | R does not have a distinct plot.glm() method. When you fit a model with glm() and run plot() , it calls ?plot.lm , which is appropriate for linear models (i.e., with a normally distributed error term). In general, the meaning of these plots (at least for linear models) can be learned in various existing threads on CV (e.g.: Residuals vs. Fitted ; qq-plots in several places: 1 , 2 , 3 ; Scale-Location ; Residuals vs Leverage ). However, those interpretations are not generally valid when the model in question is a logistic regression. More specifically, the plots will often 'look funny' and lead people to believe that there is something wrong with the model when it is perfectly fine. We can see this by looking at those plots with a couple of simple simulations where we know the model is correct: # we'll need this function to generate the Y data:
lo2p = function(lo){ exp(lo)/(1+exp(lo)) }
set.seed(10) # this makes the simulation exactly reproducible
x = runif(20, min=0, max=10) # the X data are uniformly distributed from 0 to 10
lo = -3 + .7*x # this is the true data generating process
p = lo2p(lo) # here I convert the log odds to probabilities
y = rbinom(20, size=1, prob=p) # this generates the Y data
mod = glm(y~x, family=binomial) # here I fit the model
summary(mod) # the model captures the DGP very well & has no
# ... # obvious problems:
# Deviance Residuals:
# Min 1Q Median 3Q Max
# -1.76225 -0.85236 -0.05011 0.83786 1.59393
#
# Coefficients:
# Estimate Std. Error z value Pr(>|z|)
# (Intercept) -2.7370 1.4062 -1.946 0.0516 .
# x 0.6799 0.3261 2.085 0.0371 *
# ...
#
# Null deviance: 27.726 on 19 degrees of freedom
# Residual deviance: 21.236 on 18 degrees of freedom
# AIC: 25.236
#
# Number of Fisher Scoring iterations: 4 Now let's look at the plots we get from plot.lm(): Both the Residuals vs Fitted and the Scale-Location plots look like there are problems with the model, but we know there aren't any. These plots, intended for linear models, are simply often misleading when used with a logistic regression model. Let's look at another example: set.seed(10)
x2 = rep(c(1:4), each=40) # X is a factor with 4 levels
lo = -3 + .7*x2
p = lo2p(lo)
y = rbinom(160, size=1, prob=p)
mod = glm(y~as.factor(x2), family=binomial)
summary(mod) # again, everything looks good:
# ...
# Deviance Residuals:
# Min 1Q Median 3Q Max
# -1.0108 -0.8446 -0.3949 -0.2250 2.7162
#
# Coefficients:
# Estimate Std. Error z value Pr(>|z|)
# (Intercept) -3.664 1.013 -3.618 0.000297 ***
# as.factor(x2)2 1.151 1.177 0.978 0.328125
# as.factor(x2)3 2.816 1.070 2.632 0.008481 **
# as.factor(x2)4 3.258 1.063 3.065 0.002175 **
# ...
#
# Null deviance: 160.13 on 159 degrees of freedom
# Residual deviance: 133.37 on 156 degrees of freedom
# AIC: 141.37
#
# Number of Fisher Scoring iterations: 6 Now all the plots look strange. So what do these plots show you? The Residuals vs Fitted plot can help you see, for example, if there are curvilinear trends that you missed. But the fit of a logistic regression is curvilinear by nature, so you can have odd looking trends in the residuals with nothing amiss. The Normal Q-Q plot helps you detect if your residuals are normally distributed. But the deviance residuals don't have to be normally distributed for the model to be valid, so the normality / non-normality of the residuals doesn't necessarily tell you anything. The Scale-Location plot can help you identify heteroscedasticity. But logistic regression models are pretty much heteroscedastic by nature. The Residuals vs Leverage can help you identify possible outliers. But outliers in logistic regression don't necessarily manifest in the same way as in linear regression, so this plot may or may not be helpful in identifying them. The simple take home lesson here is that these plots can be very hard to use to help you understand what is going on with your logistic regression model. It is probably best for people not to look at these plots at all when running logistic regression, unless they have considerable expertise. | {
"source": [
"https://stats.stackexchange.com/questions/121490",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/59361/"
]
} |
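If a graphical diagnostic is still wanted for a logistic regression despite the caveats above, one option often suggested is a binned residual plot: average the residuals within bins of the fitted probabilities, where a well-specified model should show averages scattering around zero with no pattern. Below is a minimal base-R sketch on simulated data (similar in spirit to the first example above, with a continuous predictor so the bins are well defined):

set.seed(10)
x <- runif(200, 0, 10)
p <- 1 / (1 + exp(-(-3 + 0.7 * x)))
y <- rbinom(200, size = 1, prob = p)
mod2 <- glm(y ~ x, family = binomial)

phat <- fitted(mod2)                    # fitted probabilities
res  <- y - phat                        # response residuals
bins <- cut(phat, breaks = quantile(phat, probs = seq(0, 1, length.out = 11)),
            include.lowest = TRUE)      # ten bins of roughly equal size
plot(tapply(phat, bins, mean), tapply(res, bins, mean),
     xlab = "average fitted probability", ylab = "average residual",
     main = "Binned residual plot")
abline(h = 0, lty = 2)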
121,852 | Certain hypotheses can be tested using Student's t -test (maybe using Welch's correction for unequal variances in the two-sample case), or by a non-parametric test like the Wilcoxon paired signed rank test, the Wilcoxon-Mann-Whitney U test, or the paired sign test. How can we make a principled decision about which test is most appropriate, particularly if the sample size is "small"? Many introductory textbooks and lecture notes give a "flowchart" approach where normality is checked (either – inadvisedly – by normality test, or more broadly by QQ plot or similar) to decide between a t -test or non-parametric test. For the unpaired two-sample t -test there may be a further check for homogeneity of variance to decide whether to apply Welch's correction. One issue with this approach is the way the decision on which test to apply depends on the observed data, and how this affects the performance (power, Type I error rate) of the selected test. Another problem is how hard checking normality is in small data sets: formal testing has low power so violations may well not be detected, but similar issues apply eyeballing the data on a QQ plot. Even egregious violations could go undetected, e.g. if the distribution is mixed but no observations were drawn from one component of the mixture. Unlike for large $n$, we can't lean on the safety-net of the Central Limit Theorem, and the asymptotic normality of the test statistic and t distribution. One principled response to this is "safety first": with no way to reliably verify the normality assumption in a small sample, stick to non-parametric methods. Another is to consider any grounds for assuming normality, theoretically (e.g. variable is sum of several random components and CLT applies) or empirically (e.g. previous studies with larger $n$ suggest variable is normal), and using a t -test only if such grounds exist. But this usually only justifies approximate normality, and on low degrees of freedom it's hard to judge how near normal it needs to be to avoid invalidating a t -test. Most guides to choosing a t-test or non-parametric test focus on the normality issue. But small samples also throw up some side-issues: If performing an "unrelated samples" or "unpaired" t-test, whether to use a Welch correction ? Some people use a hypothesis test for equality of variances, but here it would have low power; others check whether SDs are "reasonably" close or not (by various criteria). Is it safer simply to always use the Welch correction for small samples, unless there is some good reason to believe population variances are equal? If you see the choice of methods as a trade-off between power and robustness, claims about the asymptotic efficiency of the non-parametric methods are unhelpful . The rule of thumb that " Wilcoxon tests have about 95% of the power of a t-test if the data really are normal , and are often far more powerful if the data is not, so just use a Wilcoxon" is sometimes heard, but if the 95% only applies to large $n$, this is flawed reasoning for smaller samples. Small samples may make it very difficult, or impossible, to assess whether a transformation is appropriate for the data since it's hard to tell whether the transformed data belong to a (sufficiently) normal distribution. So if a QQ plot reveals very positively skewed data, which look more reasonable after taking logs, is it safe to use a t-test on the logged data? 
On larger samples this would be very tempting, but with small $n$ I'd probably hold off unless there had been grounds to expect a log-normal distribution in the first place. What about checking assumptions for the non-parametrics? Some sources recommend verifying a symmetric distribution before applying a Wilcoxon test (treating it as a test for location rather than stochastic dominance), which brings up similar problems to checking normality. If the reason we are applying a non-parametric test in the first place is a blind obedience to the mantra of "safety first", then the difficulty assessing skewness from a small sample would apparently lead us to the lower power of a paired sign test. With these small-sample issues in mind, is there a good - hopefully citable - procedure to work through when deciding between t and non-parametric tests? There have been several excellent answers, but a response considering other alternatives to rank tests, such as permutation tests, would also be welcome. | I am going to change the order of questions about. I've found textbooks and lecture notes frequently disagree, and would like a system to work through the choice that can safely be recommended as best practice, and especially a textbook or paper this can be cited to. Unfortunately, some discussions of this issue in books and so on rely on received wisdom. Sometimes that received wisdom is reasonable, sometimes it is less so (at the least in the sense that it tends to focus on a smaller issue when a larger problem is ignored); we should examine the justifications offered for the advice (if any justification is offered at all) with care. Most guides to choosing a t-test or non-parametric test focus on the normality issue. That’s true, but it’s somewhat misguided for several reasons that I address in this answer. If performing an "unrelated samples" or "unpaired" t-test, whether to use a Welch correction? This (to use it unless you have reason to think variances should be equal) is the advice of numerous references. I point to some in this answer. Some people use a hypothesis test for equality of variances, but here it would have low power. Generally I just eyeball whether the sample SDs are "reasonably" close or not (which is somewhat subjective, so there must be a more principled way of doing it) but again, with low n it may well be that the population SDs are rather further apart than the sample ones. Is it safer simply to always use the Welch correction for small samples, unless there is some good reason to believe population variances are equal?
That’s what the advice is. The properties of the tests are affected by the choice based on the assumption test. Some references on this can be seen here and here , though there are more that say similar things. The equal-variances issue has many similar characteristics to the normality issue – people want to test it, advice suggests conditioning choice of tests on the results of tests can adversely affect the results of both kinds of subsequent test – it’s better simply not to assume what you can’t adequately justify (by reasoning about the data, using information from other studies relating to the same variables and so on). However, there are differences. One is that – at least in terms of the distribution of the test statistic under the null hypothesis (and hence, its level-robustness) - non-normality is less important in large samples (at least in respect of significance level, though power might still be an issue if you need to find small effects), while the effect of unequal variances under the equal variance assumption doesn’t really go away with large sample size. What principled method can be recommended for choosing which is the most appropriate test when the sample size is "small"? With hypothesis tests, what matters (under some set of conditions) is primarily two things: What is the actual type I error rate? What is the power behaviour like? We also need to keep in mind that if we're comparing two procedures, changing the first will change the second (that is, if they’re not conducted at the same actual significance level, you would expect that higher $\alpha$ is associated with higher power). (Of course we're usually not so confident we know what distributions we're dealing with, so the sensitivity of those behaviors to changes in circumstances also matter.) With these small-sample issues in mind, is there a good - hopefully citable - checklist to work through when deciding between t and non-parametric tests? I will consider a number of situations in which I’ll make some recommendations, considering both the possibility of non-normality and unequal variances. In every case, take mention of the t-test to imply the Welch-test: n medium-large Non-normal (or unknown), likely to have near-equal variance: If the distribution is heavy-tailed, you will generally be better with a Mann-Whitney, though if it’s only slightly heavy, the t-test should do okay. With light-tails the t-test may (often) be preferred. Permutation tests are a good option (you can even do a permutation test using a t-statistic if you're so inclined). Bootstrap tests are also suitable. Non-normal (or unknown), unequal variance (or variance relationship unknown): If the distribution is heavy-tailed, you will generally be better with a Mann-Whitney if inequality of variance is only related to inequality of mean - i.e. if H0 is true the difference in spread should also be absent. GLMs are often a good option, especially if there’s skewness and spread is related to the mean. A permutation test is another option, with a similar caveat as for the rank-based tests. Bootstrap tests are a good possibility here. Zimmerman and Zumbo (1993) $^{[1]}$ suggest a Welch-t-test on the ranks which they say performs better that the Wilcoxon-Mann-Whitney in cases where the variances are unequal. n moderately small rank tests are reasonable defaults here if you expect non-normality (again with the above caveat). If you have external information about shape or variance, you might consider GLMs . 
If you expect things not to be too far from normal, t-tests may be fine. n very small Because of the problem with getting suitable significance levels, neither permutation tests nor rank tests may be suitable, and at the smallest sizes, a t-test may be the best option (there’s some possibility of slightly robustifying it). However, there’s a good argument for using higher type I error rates with small samples (otherwise you’re letting type II error rates inflate while holding type I error rates constant).
Also see de Winter (2013) $^{[2]}$ . The advice must be modified somewhat when the distributions are both strongly skewed and very discrete, such as Likert scale items where most of the observations are in one of the end categories. Then the Wilcoxon-Mann-Whitney isn’t necessarily a better choice than the t-test. Simulation can help guide choices further when you have some information about likely circumstances. I appreciate this is something of a perennial topic, but most questions concern the questioner's particular data set, sometimes a more general discussion of power, and occasionally what to do if two tests disagree, but I would like a procedure to pick the correct test in the first place! The main problem is how hard it is to check the normality assumption in a small data set: It is difficult to check normality in a small data set, and to some extent that's an important issue, but I think there's another issue of importance that we need to consider. A basic problem is that trying to assess normality as the basis of choosing between tests adversely impacts the properties of the tests you're choosing between. Any formal test for normality would have low power so violations may well not be detected. (Personally I wouldn't test for this purpose, and I'm clearly not alone, but
I've found this little use when clients demand a normality test be performed because that's what their textbook or old lecture notes or some website they found once declare should be done. This is one point where a weightier looking citation would be welcome.) Here’s an example of a reference (there are others) which is unequivocal (Fay and Proschan, 2010 $^{[3]}$ ): The choice between t- and WMW DRs should not be based on a test of normality. They are similarly unequivocal about not testing for equality of variance. To make matters worse, it is unsafe to use the Central Limit Theorem as a safety net: for small n we can't rely on the convenient asymptotic normality of the test statistic and t distribution. Nor even in large samples -- asymptotic normality of the numerator doesn’t imply that the t-statistic will have a t-distribution. However, that may not matter so much, since you should still have asymptotic normality (e.g. CLT for the numerator, and Slutsky’s theorem suggest that eventually the t-statistic should begin to look normal, if the conditions for both hold.) One principled response to this is "safety first": as there's no way to reliably verify the normality assumption on a small sample, run an equivalent non-parametric test instead. That’s actually the advice that the references I mention (or link to mentions of) give. Another approach I've seen but feel less comfortable with, is to perform a visual check and proceed with a t-test if nothing untowards is observed ("no reason to reject normality", ignoring the low power of this check). My personal inclination is to consider whether there are any grounds for assuming normality, theoretical (e.g. variable is sum of several random components and CLT applies) or empirical (e.g. previous studies with larger n suggest variable is normal). Both those are good arguments, especially when backed up with the fact that the t-test is reasonably robust against moderate deviations from normality.
(One should keep in mind, however, that "moderate deviations" is a tricky phrase; certain kinds of deviations from normality may impact the power performace of the t-test quite a bit even though those deviations are visually very small - the t-test is less robust to some deviations than others. We should keep this in mind whenever we're discussing small deviations from normality.) Beware, however, the phrasing "suggest the variable is normal". Being reasonably consistent with normality is not the same thing as normality. We can often reject actual normality with no need even to see the data – for example, if the data cannot be negative, the distribution cannot be normal. Fortunately, what matters is closer to what we might actually have from previous studies or reasoning about how the data are composed, which is that the deviations from normality should be small. If so, I would use a t-test if data passed visual inspection, and otherwise stick to non-parametrics. But any theoretical or empirical grounds usually only justify assuming approximate normality, and on low degrees of freedom it's hard to judge how near normal it needs to be to avoid invalidating a t-test. Well, that’s something we can assess the impact of fairly readily (such as via simulations, as I mentioned earlier). From what I've seen, skewness seems to matter more than heavy tails (but on the other hand I have seen some claims of the opposite - though I don't know what that's based on). For people who see the choice of methods as a trade-off between power and robustness, claims about the asymptotic efficiency of the non-parametric methods are unhelpful. For instance, the rule of thumb that "Wilcoxon tests have about 95% of the power of a t-test if the data really are normal, and are often far more powerful if the data is not, so just use a Wilcoxon" is sometimes heard, but if the 95% only applies to large n, this is flawed reasoning for smaller samples. But we can check small-sample power quite easily! It’s easy enough to simulate to obtain power curves as here . (Again, also see de Winter (2013) $^{[2]}$ ). Having done such simulations under a variety of circumstances, both for the two-sample and one-sample/paired-difference cases, the small sample efficiency at the normal in both cases seems to be a little lower than the asymptotic efficiency, but the efficiency of the signed rank and Wilcoxon-Mann-Whitney tests is still very high even at very small sample sizes. At least that's if the tests are done at the same actual significance level; you can't do a 5% test with very small samples (and least not without randomized tests for example), but if you're prepared to perhaps do (say) a 5.5% or a 3.2% test instead, then the rank tests hold up very well indeed compared with a t-test at that significance level. Small samples may make it very difficult, or impossible, to assess whether a transformation is appropriate for the data since it's hard to tell whether the transformed data belong to a (sufficiently) normal distribution. So if a QQ plot reveals very positively skewed data, which look more reasonable after taking logs, is it safe to use a t-test on the logged data? On larger samples this would be very tempting, but with small n I'd probably hold off unless there had been grounds to expect a log-normal distribution in the first place. There’s another alternative: make a different parametric assumption. 
For example, if there’s skewed data, one might, for example, in some situations reasonably consider a gamma distribution, or some other skewed family as a better approximation - in moderately large samples, we might just use a GLM, but in very small samples it may be necessary to look to a small-sample test - in many cases simulation can be useful. Alternative 2: robustify the t-test (but taking care about the choice of robust procedure so as not to heavily discretize the resulting distribution of the test statistic) - this has some advantages over a very-small-sample nonparametric procedure such as the ability to consider tests with low type I error rate. Here I'm thinking along the lines of using say M-estimators of location (and related estimators of scale) in the t-statistic to smoothly robustify against deviations from normality. Something akin to the Welch, like: $$\frac{\stackrel{\sim}{x}-\stackrel{\sim}{y}}{\stackrel{\sim}{S}_p}$$ where $\stackrel{\sim}{S}_p^2=\frac{\stackrel{\sim}{s}_x^2}{n_x}+\frac{\stackrel{\sim}{s}_y^2}{n_y}$ and $\stackrel{\sim}{x}$ , $\stackrel{\sim}{s}_x$ etc being robust estimates of location and scale respectively. I'd aim to reduce any tendency of the statistic to discreteness - so I'd avoid things like trimming and Winsorizing, since if the original data were discrete, trimming etc will exacerbate this; by using M-estimation type approaches with a smooth $\psi$ -function you achieve similar effects without contributing to the discreteness. Keep in mind we're trying to deal with the situation where $n$ is very small indeed (around 3-5, in each sample, say), so even M-estimation potentially has its issues. You could, for example, use simulation at the normal to get p-values (if sample sizes are very small, I'd suggest that over bootstrapping - if sample sizes aren't so small, a carefully-implemented bootstrap may do quite well, but then we might as well go back to Wilcoxon-Mann-Whitney). There's be a scaling factor as well as a d.f. adjustment to get to what I'd imagine would then be a reasonable t-approximation. This means we should get the kind of properties we seek very close to the normal, and should have reasonable robustness in the broad vicinity of the normal. There are a number of issues that come up that would be outside the scope of the present question, but I think in very small samples the benefits should outweigh the costs and the extra effort required. [I haven't read the literature on this stuff for a very long time, so I don't have suitable references to offer on that score.] Of course if you didn't expect the distribution to be somewhat normal-like, but rather similar to some other distribution, you could undertake a suitable robustification of a different parametric test. What if you want to check assumptions for the non-parametrics? Some sources recommend verifying a symmetric distribution before applying a Wilcoxon test, which brings up similar problems to checking normality. Indeed. I assume you mean the signed rank test*. In the case of using it on paired data, if you are prepared to assume that the two distributions are the same shape apart from location shift you are safe, since the differences should then be symmetric. Actually, we don't even need that much; for the test to work you need symmetry under the null; it's not required under the alternative (e.g. 
consider a paired situation with identically-shaped right skewed continuous distributions on the positive half-line, where the scales differ under the alternative but not under the null; the signed rank test should work essentially as expected in that case). The interpretation of the test is easier if the alternative is a location shift though. *(Wilcoxon’s name is associated with both the one and two sample rank tests – signed rank and rank sum; with their U test, Mann and Whitney generalized the situation studied by Wilcoxon, and introduced important new ideas for evaluating the null distribution, but the priority between the two sets of authors on the Wilcoxon-Mann-Whitney is clearly Wilcoxon’s -- so at least if we only consider Wilcoxon vs Mann&Whitney, Wilcoxon goes first in my book. However, it seems Stigler's Law beats me yet again, and Wilcoxon should perhaps share some of that priority with a number of earlier contributors, and (besides Mann and Whitney) should share credit with several discoverers of an equivalent test.[4][5] ) References [1]: Zimmerman DW and Zumbo BN, (1993), Rank transformations and the power of the Student t-test and Welch t′-test for non-normal populations, Canadian Journal Experimental Psychology, 47 : 523–39. [2]: J.C.F. de Winter (2013), "Using the Student’s t-test with extremely small sample sizes," Practical Assessment, Research and Evaluation , 18 :10, August, ISSN 1531-7714 http://pareonline.net/getvn.asp?v=18&n=10 [3]: Michael P. Fay and Michael A. Proschan (2010), "Wilcoxon-Mann-Whitney or t-test? On assumptions for hypothesis tests and multiple interpretations of decision rules," Stat Surv ; 4 : 1–39. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2857732/ [4]: Berry, K.J., Mielke, P.W. and Johnston, J.E. (2012), "The Two-sample Rank-sum Test: Early Development," Electronic Journal for History of Probability and Statistics , Vol.8, December pdf [5]: Kruskal, W. H. (1957), "Historical notes on the Wilcoxon unpaired two-sample test," Journal of the American Statistical Association , 52 , 356–360. | {
"source": [
"https://stats.stackexchange.com/questions/121852",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/22228/"
]
} |
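The kind of small-sample simulation referred to in this answer is straightforward to set up. The sketch below (arbitrary settings: nominal 5% level, n = 10 per group, normal and exponential errors) estimates rejection rates for the Welch t-test and the Wilcoxon-Mann-Whitney test, and can be repointed at whatever sample sizes and error distributions are of interest.

power_sim <- function(n, rdist, shift, nsim = 5000, alpha = 0.05) {
  pvals <- replicate(nsim, {
    x <- rdist(n)
    y <- rdist(n) + shift
    c(welch_t  = t.test(x, y)$p.value,                    # Welch is R's default
      wilcoxon = wilcox.test(x, y, exact = FALSE)$p.value)
  })
  rowMeans(pvals < alpha)                                 # estimated rejection rates
}

set.seed(1)
power_sim(10, rnorm, shift = 0)    # type I error rate, normal data
power_sim(10, rnorm, shift = 1)    # power, normal data
power_sim(10, rexp,  shift = 1)    # power, skewed (exponential) data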
121,860 | I am finishing up an econometrics assignment and this problem has me stuck. I have estimated a regression equation for ln hourly wages on a gender dummy variable, several race dummy variables, a quadratic form of experience, and education. ln(hourly wages) = β0 + β1female + β2black + β3american_indian + β4asian + β5other + β6education + β7experience + βexperience^2 I now want to re-estimate my model allowing the relationship between experience and wages to differ by sex. I know I need to include the βexperience*female but do I also need to include βexperience^2*female? I apologize if my notation is a bit off. Any help/explanation is greatly appreciated. | {
"source": [
"https://stats.stackexchange.com/questions/121860",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/52570/"
]
} |
122,009 | I would like to extract the slopes for each individual in a mixed effect model, as outlined in the following paragraph Mixed effects models were used to characterize individual paths of change in the cognitive summary measures, including terms for age, sex, and years of education as fixed effects (Laird and Ware, 1982; Wilson et al., 2000, 2002c).... Residual, individual cognitive decline slope terms were extracted from the mixed models, after adjustment for the effects of age, sex, and education. Person-specific, adjusted residual slopes were then used as a quantitative outcome phenotype for the genetic association analyses. These estimates equate to the difference between an individual’s slope and the predicted slope of an individual of the same age, sex, and education level. De Jager, P. L., Shulman, J. M., Chibnik, L. B., Keenan, B. T., Raj, T., Wilson, R. S., et al. (2012). A genome-wide scan for common variants affecting the rate of age-related cognitive decline . Neurobiology of Aging, 33(5), 1017.e1–1017.e15. I have looked at using the coef function to extract the coefficients for each individual, but I am unsure if this is the correct approach to be using. Can anyone provide some advice on how to do this? #example R code
library(lme4)
attach(sleepstudy)
fm1 <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy)
beta <- coef(fm1)$Subject
colnames(beta) <- c("Intercept", "Slope")
beta
summary(beta)
summary(fm1) | The model: library(lme4)
data(sleepstudy)
fm1 <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy) The function coef is the right approach for extracting individual
differences. > coef(fm1)$Subject
(Intercept) Days
308 253.6637 19.6662581
309 211.0065 1.8475834
310 212.4449 5.0184067
330 275.0956 5.6529540
331 273.6653 7.3973908
332 260.4446 10.1951151
333 268.2455 10.2436611
334 244.1725 11.5418622
335 251.0714 -0.2848735
337 286.2955 19.0955694
349 226.1950 11.6407008
350 238.3351 17.0814915
351 255.9829 7.4520286
352 272.2687 14.0032989
369 254.6806 11.3395025
370 225.7922 15.2897513
371 252.2121 9.4791308
372 263.7196 11.7513155 These values are a combination of the fixed effects and the variance components
(random effects). You can use summary and coef to obtain the coefficients
of the fixed effects. > coef(summary(fm1))[ , "Estimate"]
(Intercept) Days
251.40510 10.46729 The intercept is 251.4 and the slope (associated with Days ) is 10.47.
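The same two numbers can also be pulled out directly with fixef() , the standard lme4 accessor for the fixed effects — a minimal sketch of that call and its printed result: > fixef(fm1)
(Intercept)        Days 
  251.40510    10.46729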
These coefficients are the means over all subjects. To obtain the random effects,
you can use ranef . > ranef(fm1)$Subject
(Intercept) Days
308 2.2585637 9.1989722
309 -40.3985802 -8.6197026
310 -38.9602496 -5.4488792
330 23.6905025 -4.8143320
331 22.2602062 -3.0698952
332 9.0395271 -0.2721709
333 16.8404333 -0.2236248
334 -7.2325803 1.0745763
335 -0.3336936 -10.7521594
337 34.8903534 8.6282835
349 -25.2101138 1.1734148
350 -13.0699598 6.6142055
351 4.5778364 -3.0152574
352 20.8635944 3.5360130
369 3.2754532 0.8722166
370 -25.6128737 4.8224653
371 0.8070401 -0.9881551
372 12.3145406 1.2840295 These values are the variance components of the subjects. Every row
corresponds to one subject. Inherently the mean of each column is zero since
the values correspond to the differences in relation to the fixed effects. > colMeans(ranef(fm1)$Subject)
(Intercept) Days
4.092529e-13 -2.000283e-13 Note that these values are essentially zero; the deviations are due to the imprecision
of floating-point number representation. The result of coef(fm1)$Subject incorporates the fixed effects into
the random effects, i.e., the fixed effect coefficients are added to
the random effects. The results are individual intercepts and slopes. | {
"source": [
"https://stats.stackexchange.com/questions/122009",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/46694/"
]
} |
122,062 | Consider the following three phenomena. Stein's paradox: given some data from multivariate normal distribution in $\mathbb R^n, \: n\ge 3$, sample mean is not a very good estimator of the true mean. One can obtain an estimation with lower mean squared error if one shrinks all the coordinates of the sample mean towards zero [or towards their mean, or actually towards any value, if I understand correctly]. NB: usually Stein's paradox is formulated via considering only one single data point from $\mathbb R^n$; please correct me if this is crucial and my formulation above is not correct. Ridge regression: given some dependent variable $\mathbf y$ and some independent variables $\mathbf X$, the standard regression $\beta = (\mathbf X^\top \mathbf X)^{-1} \mathbf X^\top \mathbf y$ tends to overfit the data and lead to poor out-of-sample performance. One can often reduce overfitting by shrinking $\beta$ towards zero: $\beta = (\mathbf X^\top \mathbf X + \lambda \mathbf I)^{-1} \mathbf X^\top \mathbf y$. Random effects in multilevel/mixed models: given some dependent variable $y$ (e.g. student's height) that depends on some categorical predictors (e.g. school id and student's gender), one is often advised to treat some predictors as 'random', i.e. to suppose that the mean student's height in each school comes from some underlying normal distribution. This results in shrinking the estimations of mean height per school towards the global mean. I have a feeling that all of this are various aspects of the same "shrinkage" phenomenon, but I am not sure and certainly lack a good intuition about it. So my main question is: is there indeed a deep similarity between these three things, or is it only a superficial semblance? What is the common theme here? What is the correct intuition about it? In addition, here are some pieces of this puzzle that don't really fit together for me: In ridge regression, $\beta$ is not shrunk uniformly; ridge shrinkage is actually related to singular value decomposition of $\mathbf X$, with low-variance directions being shrunk more (see e.g. The Elements of Statistical Learning 3.4.1). But James-Stein estimator simply takes the sample mean and multiplies it by one scaling factor. How does that fit together? Update: see James-Stein Estimator with unequal variances and e.g. here regarding variances of $\beta$ coefficients. Sample mean is optimal in dimensions below 3. Does it mean that when there are only one or two predictors in the regression model, ridge regression will always be worse than ordinary least squares? Actually, come to think of it, I cannot imagine a situation in 1D (i.e. simple, non-multiple regression) where ridge shrinkage would be beneficial... Update: No. See Under exactly what conditions is ridge regression able to provide an improvement over ordinary least squares regression? On the other hand, sample mean is always suboptimal in dimensions above 3. Does it mean that with more than 3 predictors ridge regression is always better than OLS, even if all the predictors are uncorrelated (orthogonal)? Usually ridge regression is motivated by multicollinearity and the need to "stabilize" the $(\mathbf X^\top \mathbf X)^{-1}$ term. Update: Yes! See the same thread as above. There are often some heated discussion about whether various factors in ANOVA should be included as fixed or random effects. Shouldn't we, by the same logic, always treat a factor as random if it has more than two levels (or if there are more than two factors? now I am confused)? 
Update: ? Update: I got some excellent answers, but none provides enough of a big picture, so I will let the question "open". I can promise to award a bounty of at least 100 points to a new answer that will surpass the existing ones. I am mostly looking for a unifying view that could explain how the general phenomenon of shrinkage manifests itself in these various contexts and point out the principal differences between them. | Connection between James–Stein estimator and ridge regression Let $\mathbf y$ be a vector of observation of $\boldsymbol \theta$ of length $m$, ${\mathbf y} \sim N({\boldsymbol \theta}, \sigma^2 I)$, the James-Stein estimator is,
$$\widehat{\boldsymbol \theta}_{JS} =
\left( 1 - \frac{(m-2) \sigma^2}{\|{\mathbf y}\|^2} \right) {\mathbf y}.$$
In terms of ridge regression, we can estimate $\boldsymbol \theta$ via $\min_{\boldsymbol{\theta}} \|\mathbf{y}-\boldsymbol{\theta}\|^2 + \lambda\|\boldsymbol{\theta}\|^2 ,$
where the solution is $$\widehat{\boldsymbol \theta}_{\mathrm{ridge}} = \frac{1}{1+\lambda}\mathbf y.$$
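A small numerical sketch may help to see the two rules side by side (an added illustration, not from the articles acknowledged below; it takes $\sigma^2=1$ as known for James-Stein and an arbitrary $\lambda=0.5$ for ridge): set.seed(42)
m     <- 10
theta <- rnorm(m)                          # true mean vector
mse   <- function(est) mean((est - theta)^2)
sims  <- replicate(5000, {
  y     <- theta + rnorm(m)                # one noisy observation per coordinate
  js    <- (1 - (m - 2) / sum(y^2)) * y    # James-Stein shrinkage, sigma^2 = 1
  ridge <- y / (1 + 0.5)                   # ridge form with lambda = 0.5
  c(raw = mse(y), js = mse(js), ridge = mse(ridge))
})
rowMeans(sims)   # the James-Stein column comes out below the raw column on average
Both rules scale $\mathbf y$ toward zero by a factor smaller than one, which is exactly the comparison drawn next.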
It is easy to see that the two estimators are in the same form, but we need to estimate $\sigma^2$ in James-Stein estimator, and determine $\lambda$ in ridge regression via cross-validation. Connection between James–Stein estimator and random effects models Let us discuss the mixed/random effects models in genetics first. The model is $$\mathbf {y}=\mathbf {X}\boldsymbol{\beta} + \boldsymbol{Z\theta}+\mathbf {e},
\boldsymbol{\theta}\sim N(\mathbf{0},\sigma^2_{\theta} I),
\textbf{e}\sim N(\mathbf{0},\sigma^2 I).$$
If there are no fixed effects and $\mathbf {Z}=I$, the model becomes
$$\mathbf {y}=\boldsymbol{\theta}+\mathbf {e},
\boldsymbol{\theta}\sim N(\mathbf{0},\sigma^2_{\theta} I),
\textbf{e}\sim N(\mathbf{0},\sigma^2 I),$$
which is equivalent to the setting of James-Stein estimator, with some Bayesian idea. Connection between random effects models and ridge regression If we focus on the random effects models above,
$$\mathbf {y}=\mathbf {Z\theta}+\mathbf {e},
\boldsymbol{\theta}\sim N(\mathbf{0},\sigma^2_{\theta} I),
\textbf{e}\sim N(\mathbf{0},\sigma^2 I).$$
The estimation is equivalent to solve the problem
$$\min_{\boldsymbol{\theta}} \|\mathbf{y}-\mathbf {Z\theta}\|^2 + \lambda\|\boldsymbol{\theta}\|^2$$
when $\lambda=\sigma^2/\sigma_{\theta}^2$. The proof can be found in Chapter 3 of Pattern recognition and machine learning . Connection between (multilevel) random effects models and that in genetics In the random effects model above, the dimension of $\mathbf y$ is $m\times 1,$ and that of $\mathbf Z$ is $m \times p$. If we vectorize $\mathbf Z$ as $(mp)\times 1,$ and repeat $\mathbf y$ correspondingly, then we have the hierarchical/clustered structure, $p$ clusters and each with $m$ units. If we regress $\mathrm{vec}(\mathbf Z)$ on repeated $\mathbf y$, then we can obtain the random effect of $Z$ on $y$ for each cluster, though it is kind of like reverse regression. Acknowledgement : the first three points are largely learned from these two Chinese articles, 1 , 2 . | {
"source": [
"https://stats.stackexchange.com/questions/122062",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/28666/"
]
} |
122,213 | What are the differences in inferences that can be made from a latent class analysis (LCA) versus a cluster analysis? Is it correct that a LCA assumes an underlying latent variable that gives rise to the classes, whereas the cluster analysis is an empirical description of correlated attributes from a clustering algorithm? It seems that in the social sciences, the LCA has gained popularity and is considered methodologically superior given that it has a formal chi-square significance test, which the cluster analysis does not. It would be great if examples could be offered in the form of, "LCA would be appropriate for this (but not cluster analysis), and cluster analysis would be appropriate for this (but not latent class analysis). Thanks!
Brian | Latent Class Analysis is in fact a Finite Mixture Model (see here ). The main difference between FMMs and other clustering algorithms is that FMMs offer you a "model-based clustering" approach that derives clusters using a probabilistic model describing the distribution of your data. So instead of finding clusters with some arbitrarily chosen distance measure, you use a model that describes the distribution of your data, and based on this model you assess the probabilities that certain cases are members of certain latent classes. So you could say that it is a top-down approach (you start by describing the distribution of your data), while other clustering algorithms are rather bottom-up approaches (you find similarities between cases). Because you use a statistical model for your data, model selection and assessing goodness of fit are possible - contrary to clustering. Also, if you assume that there is some process or "latent structure" that underlies the structure of your data, then FMMs seem to be an appropriate choice, since they enable you to model the latent structure behind your data (rather than just looking for similarities). Another difference is that FMMs are more flexible than clustering. Clustering algorithms just do clustering, while there are FMM- and LCA-based models that enable you to do confirmatory, between-groups analysis, combine Item Response Theory (and other) models with LCA, include covariates to predict individuals' latent class membership, and/or even within-cluster regression models in latent-class regression , enable you to model changes over time in the structure of your data, etc. For more examples see: Hagenaars J.A. & McCutcheon, A.L. (2009). Applied Latent Class
Analysis. Cambridge University Press. and the documentation of flexmix and poLCA packages in R, including the following papers: Linzer, D. A., & Lewis, J. B. (2011). poLCA: An R package for
polytomous variable latent class analysis. Journal of Statistical
Software, 42(10), 1-29. Leisch, F. (2004). Flexmix: A general framework for finite mixture
models and latent class regression in R. Journal of Statistical
Software, 11(8), 1-18. Grün, B., & Leisch, F. (2008). FlexMix version 2: finite mixtures with
concomitant variables and varying and constant parameters . Journal of
Statistical Software, 28(4), 1-35. | {
"source": [
"https://stats.stackexchange.com/questions/122213",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/48088/"
]
} |
122,225 | Despite having seen these terms 502847894789 times, I cannot for the life of me remember the difference between sensitivity, specificity, precision, accuracy, and recall. They're pretty simple concepts, but the names are highly unintuitive to me, so I keep getting them confused with each other. What is a good way to think about these concepts so the names start making sense? Put another way, why were these names chosen for these concepts, as opposed to some other names? | For precision and recall, each has the true positives (TP) as the numerator, divided by a different denominator. Precision: TP / Predicted positive; Recall: TP / Real positive. | {
"source": [
"https://stats.stackexchange.com/questions/122225",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/59374/"
]
} |
122,409 | Suppose I want to learn a classifier that predicts if an email is spam. And suppose only 1% of emails are spam. The easiest thing to do would be to learn the trivial classifier that says none of the emails are spam. This classifier would give us 99% accuracy, but it wouldn't learn anything interesting, and would have a 100% rate of false negatives. To solve this problem, people have told me to "downsample", or learn on a subset of the data where 50% of the examples are spam and 50% are not spam. But I'm worried about this approach, since once we build this classifier and start using it on a real corpus of emails (as opposed to a 50/50 test set), it may predict that a lot of emails are spam when they're really not. Just because it's used to seeing much more spam than there actually is in the dataset. So how do we fix this problem? ("Upsampling," or repeating the positive training examples multiple times so 50% of the data is positive training examples, seems to suffer from similar problems.) | Most classification models in fact don't yield a binary decision, but rather a continuous decision value (for instance, logistic regression models output a probability, SVMs output a signed distance to the hyperplane, ...). Using the decision values we can rank test samples, from 'almost certainly positive' to 'almost certainly negative'. Based on the decision value, you can always assign some cutoff that configures the classifier in such a way that a certain fraction of data is labeled as positive. Determining an appropriate threshold can be done via the model's ROC or PR curves. You can play with the decision threshold regardless of the balance used in the training set. In other words, techniques like up -or downsampling are orthogonal to this. Assuming the model is better than random, you can intuitively see that increasing the threshold for positive classification (which leads to less positive predictions) increases the model's precision at the cost of lower recall and vice versa. Consider SVM as an intuitive example: the main challenge is to learn the orientation of the separating hyperplane. Up -or downsampling can help with this (I recommend preferring upsampling over downsampling). When the orientation of the hyperplane is good, we can play with the decision threshold (e.g. signed distance to the hyperplane) to get a desired fraction of positive predictions. | {
"source": [
"https://stats.stackexchange.com/questions/122409",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/59374/"
]
} |
122,421 | I have a 3D dataset with at least millions of data points (scatter events from atoms, approximately Gaussian). I am modeling this data with a Gaussian Mixture Model. The usual approach would be to optimize the GMM with the EM algorithm. However, I have substantial priors (e.g., the data is produced by a polymer made of light atoms) so the only way to find the correct GMM is via sampling. Here's the problem: it's very expensive to compare a GMM to the data $D$, and I have to do that evaluation frequently. I would like to introduce an intermediate step: first model $G_d$ without priors, and then introduce priors and re-model $G_m$ where both are GMMs. Then while sampling $G_m$ I could just compare it to $G_d$ instead of $D$, which is a lot cheaper. So $G_d$ is kind of a data reduction. Intuitively the likelihood function for $G_m$ should be: $$
p(D|G_m)\approx p(D|G_d)S(G_d|G_m)
$$ where $S(G_d|G_m)$ is some kind of scoring function (maybe K-L divergence? correlation?).
My questions are: Can you re-model in this way and still get a good approximation of the likelihood? What types of functions can be used for $S(G_d|G_m)$ and still get a proper PDF? Does the choice of number of components in $G_d$ affect the final likelihood? Below I attempt to illustrate this problem. The magenta $G_d$ is computed at the beginning. Then the red $G_m$ with its additional priors (including size and linearity) are fit to the magenta one, much faster than comparing to the data itself. Quick clarification : the number of Gaussians in $G_m$ is much higher than $G_d$. That's because the only prior on $G_d$ is the Dirichlet one (so it's just regular Bayesian GMM fitting) whereas $G_m$ has much more detailed priors which enables us to make a more detailed model than the data itself would indicate. | Most classification models in fact don't yield a binary decision, but rather a continuous decision value (for instance, logistic regression models output a probability, SVMs output a signed distance to the hyperplane, ...). Using the decision values we can rank test samples, from 'almost certainly positive' to 'almost certainly negative'. Based on the decision value, you can always assign some cutoff that configures the classifier in such a way that a certain fraction of data is labeled as positive. Determining an appropriate threshold can be done via the model's ROC or PR curves. You can play with the decision threshold regardless of the balance used in the training set. In other words, techniques like up -or downsampling are orthogonal to this. Assuming the model is better than random, you can intuitively see that increasing the threshold for positive classification (which leads to less positive predictions) increases the model's precision at the cost of lower recall and vice versa. Consider SVM as an intuitive example: the main challenge is to learn the orientation of the separating hyperplane. Up -or downsampling can help with this (I recommend preferring upsampling over downsampling). When the orientation of the hyperplane is good, we can play with the decision threshold (e.g. signed distance to the hyperplane) to get a desired fraction of positive predictions. | {
"source": [
"https://stats.stackexchange.com/questions/122421",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/59667/"
]
} |
122,722 | A few years ago I designed a radiation detector that works by measuring the interval between events rather than counting them. My assumption was, that when measuring non-contiguous samples, on average I would measure half of the actual interval. However when I tested the circuit with a calibrated source the reading was a factor of two too high which meant I had been measuring the full interval. In an old book on probability and statistics I found a section about something called "The Waiting Paradox". It presented an example in which a bus arrives at the bus stop every 15 minutes and a passenger arrives at random, it stated that the passenger would on average wait the full 15 minutes. I have never been able to understand the math presented with the example and continue to look for an explanation. If someone can explain why it is so that the passenger waits the full interval I will sleep better. | As Glen_b pointed out, if the buses arrive every $15$ minutes without any uncertainty whatsoever , we know that the maximum possible waiting time is $15$ minutes. If from our part we arrive "at random", we feel that "on average" we will wait half the maximum possible waiting time . And the maximum possible waiting time is here equal to the maximum possible length between two consecutive arrivals. Denote our waiting time $W$ and the maximum length between two consecutive bus arrivals $R$, and we argue that $$ E(W) = \frac 12 R = \frac {15}{2} = 7.5 \tag{1}$$ and we are right. But suddenly certainty is taken away from us and we are told that $15$ minutes is now the average length between two bus arrivals. And we fall into the "intuitive thinking trap" and think: "we only need to replace $R$ with its expected value", and we argue $$ E(W) = \frac 12 E(R) = \frac {15}{2} = 7.5\;\;\; \text{WRONG} \tag{2}$$ A first indication that we are wrong, is that $R$ is not "length between any two consecutive bus-arrivals", it is " maximum length etc". So in any case, we have that $E(R) \neq 15$. How did we arrive at equation $(1)$? We thought:"waiting time can be from $0$ to $15$ maximum . I arrive with equal probability at any instance, so I "choose" randomly and with equal probability all possible waiting times. Hence half the maximum length between two consecutive bus arrivals is my average waiting time". And we are right. But by mistakenly inserting the value $15$ in equation $(2)$, it no longer reflects our behavior. With $15$ in place of $E(R)$, equation $(2)$ says "I choose randomly and with equal probability all possible waiting times that are smaller or equal to the average length between two consecutive bus-arrivals " -and here is where our intuitive mistake lies, because, our behavior has not change - so, by arriving randomly uniformly, we in reality still "choose randomly and with equal probability" all possible waiting times - but "all possible waiting times" is not captured by $15$ - we have forgotten the right tail of the distribution of lengths between two consecutive bus-arrivals. So perhaps, we should calculate the expected value of the maximum length between any two consecutive bus arrivals, is this the correct solution? Yes it could be, but : the specific "paradox" goes hand-in-hand with a specific stochastic assumption: that bus-arrivals are modeled by the benchmark Poisson process, which means that as a consequence we assume that the time-length between any two consecutive bus-arrivals follows an Exponential distribution. 
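Before formalising this, a short simulation sketch (an added illustration that simply assumes this Poisson model with a mean gap of 15 minutes) shows the effect directly — a passenger arriving at a uniformly random time waits about 15 minutes on average, not 7.5: set.seed(1)
gaps       <- rexp(2e5, rate = 1/15)             # gaps between consecutive buses, mean 15
buses      <- cumsum(gaps)                       # bus arrival times
passengers <- runif(1e5, 0, 0.99 * max(buses))   # arrive uniformly at random, away from the end
wait       <- buses[findInterval(passengers, buses) + 1] - passengers
mean(wait)                                       # close to 15, not 7.5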
Denote $\ell$ that length, and we have that $$f_{\ell}(\ell) = \lambda e^{-\lambda \ell},\;\; \lambda = 1/15,\;\; E(\ell) = 15$$ This is approximate of course, since the Exponential distribution has unbounded support from the right, meaning that strictly speaking "all possible waiting times" include, under this modeling assumption, larger and larger magnitudes up to and "including" infinity, but with vanishing probability. But wait, the Exponential is memoryless : no matter at what point in time we will arrive, we face the same random variable , irrespective of what has gone before. Given this stochastic/distributional assumption, any point in time is part of an "interval between two consecutive bus-arrivals" whose length is described by the same probability distribution with expected value (not maximum value) $15$: "I am here, I am surrounded by an interval between two bus-arrivals. Some of its length lies in the past and some in the future, but I have no way of knowing how much of each, so the best I can do is ask What is its expected length -which will be my average waiting time?" - And the answer is always "$15$", alas. | {
"source": [
"https://stats.stackexchange.com/questions/122722",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/60007/"
]
} |
122,917 | I came across a casual remark on The Chemical Statistician that a sample median could often be a choice for a sufficient statistic but, besides the obvious case of one or two observations where it equals the sample mean, I cannot think of another non-trivial and iid case where the sample median is sufficient. | In the case when the support of the distribution does not depend on the unknown parameter $\theta, $ we can invoke the (Fréchet-Darmois-)Pitman-Koopman theorem, namely that the density of the observations is necessarily of the exponential family form, $$
\exp\{ \theta T(x) - \psi(\theta) \}h(x)
$$ to conclude that, since the natural sufficient statistic $$
S=\sum_{i=1}^n T(x_i)
$$ is also minimal sufficient, then the median should be a function of $S$ , and the other way as well, which is impossible: modifying an extreme in the observations $x_1,\ldots,x_n$ , $n>2$ , modifies $S$ but does not modify the median. Therefore, the median cannot be sufficient when $n>2$ . In the alternative case when the support of the distribution does depend on the unknown parameter $θ$ , I am less happy with the following proof: first, we can wlog consider the simple case when $$
f(x|\theta) = h(x) \mathbb{I}_{A_\theta}(x) \tau(\theta)
$$ where the set $A_\theta$ indexed by $θ$ denotes the support of $f(\cdot|\theta)$ . In that case, assuming the median is sufficient, the factorisation theorem implies that we have that $$
\prod_{i=1}^n \mathbb{I}_{A_\theta}(x_i)
$$ is a binary ( $0-1$ ) function of the sample median $$
\prod_{i=1}^n \mathbb{I}_{A_\theta}(x_i) = \mathbb{I}_{B^n_\theta}(\text{med}(x_{1:n}))
$$ Indeed, there is no extra term in the factorisation since it should also be (i) a binary function of the data and (ii) independent of $\theta$ .
Adding a further observation $x_{n+1}$ whose value is such that it does not modify the sample median then leads to a contradiction, since it may lie inside or outside the support set, while $$
\mathbb{I}_{B^{n+1}_\theta}(\text{med}(x_{1:n+1}))=\mathbb{I}_{B^n_\theta}(\text{med}(x_{1:n}))\times \mathbb{I}_{A_\theta}(x_{n+1}).
$$ | {
"source": [
"https://stats.stackexchange.com/questions/122917",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7224/"
]
} |
123,060 | I have the following problem at hand: I have a very long list of words, possibly names, surnames, etc. I need to cluster this word list, such that similar words, for example words with a small edit (Levenshtein) distance, appear in the same cluster. For example, "algorithm" and "alogrithm" should have a high chance of appearing in the same cluster. I am well aware of the classical unsupervised clustering methods like k-means clustering and EM clustering in the Pattern Recognition literature. The problem here is that these methods work on points which reside in a vector space. What I have at hand here are strings. It seems that the question of how to represent strings in a numerical vector space and to calculate "means" of string clusters is not sufficiently answered, according to my survey efforts until now. A naive approach to attack this problem would be to combine k-Means clustering with Levenshtein distance, but the question still remains "How to represent "means" of strings?". There is a weight called the TF-IDF weight, but it seems that it is mostly related to the area of "text document" clustering, not to the clustering of single words. It seems that there are some special string clustering algorithms existing, like the one at http://pike.psu.edu/cleandb06/papers/CameraReady_120.pdf My search in this area is still going on, but I wanted to get ideas from here as well. What would you recommend in this case? Is anyone aware of any methods for this kind of problem? | Seconding @micans recommendation for Affinity Propagation . From the paper: Frey, Brendan J., and Delbert Dueck. "Clustering by passing messages between data points." Science 315.5814 (2007): 972-976. It's super easy to use via many packages.
It works on anything you can define the pairwise similarity on. Which you can get by multiplying the Levenshtein distance by -1. I threw together a quick example using the first paragraph of your question as input. In Python 3: import numpy as np
from sklearn.cluster import AffinityPropagation
import distance
words = "YOUR WORDS HERE".split(" ") #Replace this line
words = np.asarray(words) #So that indexing with a list will work
lev_similarity = -1*np.array([[distance.levenshtein(w1,w2) for w1 in words] for w2 in words])
affprop = AffinityPropagation(affinity="precomputed", damping=0.5)
affprop.fit(lev_similarity)
for cluster_id in np.unique(affprop.labels_):
exemplar = words[affprop.cluster_centers_indices_[cluster_id]]
cluster = np.unique(words[np.nonzero(affprop.labels_==cluster_id)])
cluster_str = ", ".join(cluster)
print(" - *%s:* %s" % (exemplar, cluster_str)) Output was (exemplars in italics to the left of the cluster they are exemplar of): have: chances, edit, hand, have, high following: following problem: problem I: I, a, at, etc, in, list, of possibly: possibly cluster: cluster word: For, and, for, long, need, should, very, word, words similar: similar Levenshtein: Levenshtein distance: distance the: that, the, this, to, with same: example, list, names, same, such, surnames algorithm: algorithm, alogrithm appear: appear, appears Running it on a list of 50 random first names : Diane: Deana, Diane, Dionne, Gerald, Irina, Lisette, Minna, Nicki, Ricki Jani: Clair, Jani, Jason, Jc, Kimi, Lang, Marcus, Maxima, Randi, Raul Verline: Destiny, Kellye, Marylin, Mercedes, Sterling, Verline Glenn: Elenor, Glenn, Gwenda Armandina: Armandina, Augustina Shiela: Ahmed, Estella, Milissa, Shiela, Thresa, Wynell Laureen: Autumn, Haydee, Laureen, Lauren Alberto: Albertha, Alberto, Robert Lore: Ammie, Doreen, Eura, Josef, Lore, Lori, Porter Looks pretty great to me (that was fun). | {
"source": [
"https://stats.stackexchange.com/questions/123060",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/31611/"
]
} |
123,063 | In some disciplines, PCA (principal component analysis) is systematically used without any justification, and PCA and EFA (exploratory factor analysis) are considered as synonyms. I therefore recently used PCA to analyse the results of a scale validation study (21 items on 7-points Likert scale, assumed to compose 3 factors of 7 items each) and a reviewer asks me why I chose PCA instead of EFA. I read about the differences between both techniques, and it seems that EFA is favored against PCA in a majority of your answers here. Do you have any good reasons for why PCA would be a better choice? What benefits it could provide and why it could be a wise choice in my case? | Disclaimer: @ttnphns is very knowledgeable about both PCA and FA, and I respect his opinion and have learned a lot from many of his great answers on the topic. However, I tend to disagree with his reply here, as well as with other (numerous) posts on this topic here on CV, not only his; or rather, I think they have limited applicability. I think that the difference between PCA and FA is overrated. Look at it like that: both methods attempt to provide a low-rank approximation of a given covariance (or correlation) matrix. "Low-rank" means that only a limited (low) number of latent factors or principal components is used. If the $n \times n$ covariance matrix of the data is $\mathbf C$, then the models are: \begin{align}
\mathrm{PCA:} &\:\:\: \mathbf C \approx \mathbf W \mathbf W^\top \\
\mathrm{PPCA:} &\:\:\: \mathbf C \approx \mathbf W \mathbf W^\top + \sigma^2 \mathbf I \\
\mathrm{FA:} &\:\:\: \mathbf C \approx \mathbf W \mathbf W^\top + \boldsymbol \Psi
\end{align} Here $\mathbf W$ is a matrix with $k$ columns (where $k$ is usually chosen to be a small number, $k<n$), representing $k$ principal components or factors, $\mathbf I$ is an identity matrix, and $\boldsymbol \Psi$ is a diagonal matrix. Each method can be formulated as finding $\mathbf W$ (and the rest) minimizing the [norm of the] difference between left-hand and right-hand sides. PPCA stands for probabilistic PCA , and if you don't know what that is, it does not matter so much for now. I wanted to mention it, because it neatly fits between PCA and FA, having intermediate model complexity. It also puts the allegedly large difference between PCA and FA into perspective: even though it is a probabilistic model (exactly like FA), it actually turns out to be almost equivalent to PCA ($\mathbf W$ spans the same subspace). Most importantly, note that the models only differ in how they treat the diagonal of $\mathbf C$. As the dimensionality $n$ increases, the diagonal becomes in a way less and less important (because there are only $n$ elements on the diagonal and $n(n-1)/2 = \mathcal O (n^2)$ elements off the diagonal). As a result, for the large $n$ there is usually not much of a difference between PCA and FA at all, an observation that is rarely appreciated. For small $n$ they can indeed differ a lot. Now to answer your main question as to why people in some disciplines seem to prefer PCA. I guess it boils down to the fact that it is mathematically a lot easier than FA (this is not obvious from the above formulas, so you have to believe me here): PCA -- as well as PPCA, which is only slightly different, -- has an analytic solution, whereas FA does not. So FA needs to be numerically fit, there exist various algorithms of doing it, giving possibly different answers and operating under different assumptions, etc. etc. In some cases some algorithms can get stuck (see e.g. "heywood cases"). For PCA you perform an eigen-decomposition and you are done; FA is a lot more messy. Technically, PCA simply rotates the variables, and that is why one can refer to it as a mere transformation, as @NickCox did in his comment above. PCA solution does not depend on $k$: you can find first three PCs ($k=3$) and the first two of those are going to be identical to the ones you would find if you initially set $k=2$. That is not true for FA: solution for $k=2$ is not necessarily contained inside the solution for $k=3$. This is counter-intuitive and confusing. Of course FA is more flexible model than PCA (after all, it has more parameters) and can often be more useful. I am not arguing against that. What I am arguing against, is the claim that they are conceptually very different with PCA being about "describing the data" and FA being about "finding latent variables". I just do not see this is as true [almost] at all. To comment on some specific points mentioned above and in the linked answers: "in PCA the number of dimensions to extract/retain is fundamentally subjective, while in EFA the number is fixed, and you usually have to check several solutions" -- well, the choice of the solution is still subjective, so I don't see any conceptual difference here. In both cases, $k$ is (subjectively or objectively) chosen to optimize the trade-off between model fit and model complexity. "FA is able to explain pairwise correlations (covariances). PCA generally cannot do it" -- not really, both of them explain correlations better and better as $k$ grows. 
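A compact R sketch of the practical difference mentioned above — PCA is one eigen-decomposition, FA an iterative fit (an added illustration on the built-in mtcars correlation matrix with $k=2$; only the off-diagonal entries are compared, since FA reproduces the diagonal exactly through $\boldsymbol \Psi$): R <- cor(mtcars)                  # 11 x 11 correlation matrix
k <- 2
e     <- eigen(R)                 # PCA: a single eigen-decomposition, no iterations
W_pca <- e$vectors[, 1:k] %*% diag(sqrt(e$values[1:k]))
W_fa  <- loadings(factanal(covmat = R, factors = k, n.obs = nrow(mtcars)))  # FA: iterative ML fit
offdiag <- function(M) M[row(M) != col(M)]
c(pca = sum(offdiag(R - W_pca %*% t(W_pca))^2),   # how well each W W' reproduces
  fa  = sum(offdiag(R - W_fa  %*% t(W_fa))^2))    # the off-diagonal correlations
The point is not which number comes out smaller on this toy example, but that the first column needs nothing beyond eigen() while the second is a numerical optimisation.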
Sometimes extra confusion arises (but not in @ttnphns's answers!) due to the different practices in the disciplines using PCA and FA. For example, it is a common practice to rotate factors in FA to improve interpretability. This is rarely done after PCA, but in principle nothing is preventing it. So people often tend to think that FA gives you something "interpretable" and PCA does not, but this is often an illusion. Finally, let me stress again that for very small $n$ the differences between PCA and FA can indeed be large, and maybe some of the claims in favour of FA are done with small $n$ in mind. As an extreme example, for $n=2$ a single factor can always perfectly explain the correlation, but one PC can fail to do it quite badly. Update 1: generative models of the data You can see from the number of comments that what I am saying is taken to be controversial. At the risk of flooding the comment section even further, here are some remarks regarding "models" (see comments by @ttnphns and @gung). @ttnphns does not like that I used the word "model" [of the covariance matrix] to refer to the approximations above; it is an issue of terminology, but what he calls "models" are probabilistic/generative models of the data : \begin{align}
\mathrm{PPCA}: &\:\:\: \mathbf x = \mathbf W \mathbf z + \boldsymbol \mu + \boldsymbol \epsilon, \; \boldsymbol \epsilon \sim \mathcal N(0, \sigma^2 \mathbf I) \\
\mathrm{FA}: &\:\:\: \mathbf x = \mathbf W \mathbf z + \boldsymbol \mu + \boldsymbol \epsilon, \; \boldsymbol \epsilon \sim \mathcal N(0, \boldsymbol \Psi)
\end{align} Note that PCA is not a probabilistic model, and cannot be formulated in this way. The difference between PPCA and FA is in the noise term: PPCA assumes the same noise variance $\sigma^2$ for each variable, whereas FA assumes different variances $\Psi_{ii}$ ("uniquenesses"). This minor difference has important consequences. Both models can be fit with a general expectation-maximization algorithm. For FA no analytic solution is known, but for PPCA one can analytically derive the solution that EM will converge to (both $\sigma^2$ and $\mathbf W$). Turns out, $\mathbf W_\mathrm{PPCA}$ has columns in the same direction but with a smaller length than standard PCA loadings $\mathbf W_\mathrm{PCA}$ (I omit exact formulas). For that reason I think of PPCA as "almost" PCA: $\mathbf W$ in both cases span the same "principal subspace". The proof ( Tipping and Bishop 1999 ) is a bit technical; the intuitive reason for why homogeneous noise variance leads to a much simpler solution is that $\mathbf C - \sigma^2 \mathbf I$ has the same eigenvectors as $\mathbf C$ for any value of $\sigma^2$, but this is not true for $\mathbf C - \boldsymbol \Psi$. So yes, @gung and @ttnphns are right in that FA is based on a generative model and PCA is not, but I think it is important to add that PPCA is also based on a generative model, but is "almost" equivalent to PCA. Then it ceases to seem such an important difference. Update 2: how come PCA provides best approximation to the covariance matrix, when it is well-known to be looking for maximal variance? PCA has two equivalent formulations: e.g. first PC is (a) the one maximizing the variance of the projection and (b) the one providing minimal reconstruction error. More abstractly, the equivalence between maximizing variance and minimizing reconstruction error can be seen using Eckart-Young theorem . If $\mathbf X$ is the data matrix (with observations as rows, variables as columns, and columns are assumed to be centered) and its SVD decomposition is $\mathbf X=\mathbf U\mathbf S\mathbf V^\top$, then it is well known that columns of $\mathbf V$ are eigenvectors of the scatter matrix (or covariance matrix, if divided by the number of observations) $\mathbf C=\mathbf X^\top \mathbf X=\mathbf V\mathbf S^2\mathbf V^\top$ and so they are axes maximizing the variance (i.e. principal axes). But by the Eckart-Young theorem, first $k$ PCs provide the best rank-$k$ approximation to $\mathbf X$: $\mathbf X_k=\mathbf U_k\mathbf S_k \mathbf V^\top_k$ (this notation means taking only $k$ largest singular values/vectors) minimizes $\|\mathbf X-\mathbf X_k\|^2$. The first $k$ PCs provide not only the best rank-$k$ approximation to $\mathbf X$, but also to the covariance matrix $\mathbf C$. Indeed, $\mathbf C=\mathbf X^\top \mathbf X=\mathbf V\mathbf S^2\mathbf V^\top$, and the last equation provides the SVD decomposition of $\mathbf C$ (because $\mathbf V$ is orthogonal and $\mathbf S^2$ is diagonal). So the Eckert-Young theorem tells us that the best rank-$k$ approximation to $\mathbf C$ is given by $\mathbf C_k = \mathbf V_k\mathbf S_k^2\mathbf V_k^\top$. This can be transformed by noticing that $\mathbf W = \mathbf V\mathbf S$ are PCA loadings, and so $$\mathbf C_k=\mathbf V_k\mathbf S_k^2\mathbf V^\top_k=(\mathbf V\mathbf S)_k(\mathbf V\mathbf S)_k^\top=\mathbf W_k\mathbf W^\top_k.$$ The bottom-line here is that
$$ \mathrm{minimizing} \;
\left\{\begin{array}{ll}
\|\mathbf C-\mathbf W\mathbf W^\top\|^2 \\ \|\mathbf C-\mathbf W\mathbf W^\top-\sigma^2\mathbf I\|^2 \\ \|\mathbf C-\mathbf W\mathbf W^\top-\boldsymbol\Psi\|^2\end{array}\right\} \;
\mathrm{leads \: to} \;
\left\{\begin{array}{cc} \mathrm{PCA}\\ \mathrm{PPCA} \\ \mathrm{FA} \end{array}\right\} \;
\mathrm{loadings},$$
as stated in the beginning. Update 3: numerical demonstration that PCA$\to$FA when $n \to \infty$ I was encouraged by @ttnphns to provide a numerical demonstration of my claim that as dimensionality grows, PCA solution approaches FA solution. Here it goes. I generated a $200\times 200$ random correlation matrix with some strong off-diagonal correlations. I then took the upper-left $n \times n$ square block $\mathbf C$ of this matrix with $n=25, 50, \dots 200$ variables to investigate the effect of the dimensionality. For each $n$, I performed PCA and FA with number of components/factors $k=1\dots 5$, and for each $k$ I computed the off-diagonal reconstruction error $$\sum_{i\ne j}\left[\mathbf C - \mathbf W \mathbf W^\top\right]^2_{ij}$$ (note that on the diagonal, FA reconstructs $\mathbf C$ perfectly, due to the $\boldsymbol \Psi$ term, whereas PCA does not; but the diagonal is ignored here). Then for each $n$ and $k$, I computed the ratio of the PCA off-diagonal error to the FA off-diagonal error. This ratio has to be above $1$, because FA provides the best possible reconstruction. On the right, different lines correspond to different values of $k$, and $n$ is shown on the horizontal axis. Note that as $n$ grows, ratios (for all $k$) approach $1$, meaning that PCA and FA yield approximately the same loadings, PCA$\approx$FA. With relatively small $n$, e.g. when $n=25$, PCA performs [expectedly] worse, but the difference is not that strong for small $k$, and even for $k=5$ the ratio is below $1.2$. The ratio can become large when the number of factors $k$ becomes comparable with the number of variables $n$. In the example I gave above with $n=2$ and $k=1$, FA achieves $0$ reconstruction error, whereas PCA does not, i.e. the ratio would be infinite. But getting back to the original question, when $n=21$ and $k=3$, PCA will only moderately lose to FA in explaining the off-diagonal part of $\mathbf C$. For an illustrated example of PCA and FA applied to a real dataset (wine dataset with $n=13$), see my answers here: What are the differences between Factor Analysis and Principal Component Analysis? PCA and exploratory Factor Analysis on the same data set | {
"source": [
"https://stats.stackexchange.com/questions/123063",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/60212/"
]
} |
123,124 | Let's say we have an SVM classifier: how do we generate an ROC curve, theoretically speaking (given that we generate a TPR and FPR at each threshold)? And how do we determine the optimal threshold for this SVM classifier? | Use the SVM classifier to classify a set of annotated examples; one prediction run over the examples identifies "one point" on the ROC space. Suppose the number of examples is 200; first count the number of examples in each of the four cases. \begin{array} {|r|r|r|}
\hline
& \text{labeled true} & \text{labeled false} \\
\hline
\text{predicted true} &71& 28\\
\hline
\text{predicted false} &57&44 \\
\hline
\end{array} Then compute TPR (True Positive Rate) and FPR (False Positive Rate): $TPR = 71/ (71+57)=0.5547$ , and $FPR=28/(28+44) = 0.3889$ . On the ROC space, the x-axis is FPR and the y-axis is TPR, so the point $(0.3889, 0.5547)$ is obtained. To draw an ROC curve, just: Adjust some threshold value that controls the number of examples labelled true or false. For example, if the concentration of a certain protein above α% signifies a disease, different values of α yield different final TPR and FPR values. The threshold values can be determined in a way similar to grid search: label training examples with different threshold values, train classifiers with the different sets of labelled examples, run the classifiers on the test data, compute the FPR values, and select threshold values that cover low (close to 0) and high (close to 1) FPR values, i.e., close to 0, 0.05, 0.1, ..., 0.95, 1. Generate many sets of annotated examples. Run the classifier on each set of examples. Compute a (FPR, TPR) point for each of them. Draw the final ROC curve through those points. Some details can be checked in http://en.wikipedia.org/wiki/Receiver_operating_characteristic . Besides, these two links are useful for determining an optimal threshold. A simple method is to take the threshold with the maximal sum of the true positive and true negative rates (equivalently, maximal TPR − FPR); other, finer criteria may involve additional variables tied to the different thresholds, such as financial costs, etc. http://www.medicalbiostatistics.com/roccurve.pdf http://www.kovcomp.co.uk/support/XL-Tut/life-ROC-curves-receiver-operating-characteristic.html | {
"source": [
"https://stats.stackexchange.com/questions/123124",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/41749/"
]
} |
123,144 | A colleague says that estimating the following time series model is statistically sound: $$y_t = \beta_0 + \beta_1 x_{1t} + \beta_2 x_{2t} + e_t$$ where $y_t$ is nonstationary $I(1)$, $x_{1t}$ is nonstationary but cointegrated with $y_t$, $x_{2t}$ is stationary, $e_t$ is a white noise residual and $\beta{_*}$ are parameters. I'm not so sure. The safe approach would just be to fit an Error Correction Model to these variables, but in this case there is resistance to doing that (long story). My intuition is that because the dependent variable is nonstationary and because $x_{2t}$ is stationary, that the covariance$(y_t,x_{2t})$ will be undefined and $\beta_2$ subject to change as the data set grows. So an ECM is the straightforward/classic way to model these series, but is the equation above legitimate? | Use the SVM classifier to classify a set of annotated examples, and "one point" on the ROC space based on one prediction of the examples can be identified. Suppose the number of examples is 200, first count the number of examples of the four cases. \begin{array} {|r|r|r|}
\hline
& \text{labeled true} & \text{labeled false} \\
\hline
\text{predicted true} &71& 28\\
\hline
\text{predicted false} &57&44 \\
\hline
\end{array} Then compute TPR (True Positive Rate) and FPR (False Positive Rate). $TPR = 71/ (71+57)=0.5547$ , and $FPR=28/(28+44) = 0.3889$ On the ROC space, the x-axis is FPR, and the y-axis is TPR. So point $(0.3889, 0.5547)$ is obtained. To draw an ROC curve, just Adjust some threshold value that control the number of examples labelled true or false For example, if concentration of certain protein above α% signifies a disease, different values of α yield different final TPR and FPR values. The threshold values can be simply determined in a way similar to grid search; label training examples with different threshold values, train classifiers with different sets of labelled examples, run the classifier on the test data, compute FPR values, and select the threshold values that cover low (close to 0) and high (close to 1) FPR values, i.e., close to 0, 0.05, 0.1, ..., 0.95, 1 Generate many sets of annotated examples Run the classifier on the sets of examples Compute a (FPR, TPR) point for each of them Draw the final ROC curve Some details can be checked in http://en.wikipedia.org/wiki/Receiver_operating_characteristic . Besides, these two links are useful about how to determine an optimal threshold. A simple method is to take the one with maximal sum of true positive and false negative rates. Other finer criteria may include other variables involving different thresholds like financial costs, etc. http://www.medicalbiostatistics.com/roccurve.pdf http://www.kovcomp.co.uk/support/XL-Tut/life-ROC-curves-receiver-operating-characteristic.html | {
"source": [
"https://stats.stackexchange.com/questions/123144",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/60243/"
]
} |
123,320 | In showing that MSE can be decomposed into variance plus the square of Bias, the proof in Wikipedia has a step, highlighted in the picture. How does this work? How is the expectation pushed into the product from the 3rd step to the 4th step? If the two terms are independent, shouldn't the expectation be applied to both terms? And if they aren't, is this step valid? | The trick is that $\mathbb{E}(\hat{\theta}) - \theta$ is a constant, so it can be pulled outside the expectation; what remains is $\mathbb{E}[\hat{\theta} - \mathbb{E}(\hat{\theta})] = 0$, which is why the cross term vanishes. | {
"source": [
"https://stats.stackexchange.com/questions/123320",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/55780/"
]
} |
123,609 | I am trying to solve the following question: Player A won 17 out of 25 games while player B won 8 out of 20 - is
there a significant difference between both ratios? The thing to do in R that comes to mind is the following: > prop.test(c(17,8),c(25,20),correct=FALSE)
2-sample test for equality of proportions without continuity correction
data: c(17, 8) out of c(25, 20)
X-squared = 3.528, df = 1, p-value = 0.06034
alternative hypothesis: two.sided
95 percent confidence interval:
-0.002016956 0.562016956
sample estimates:
prop 1 prop 2
0.68 0.40 So this test says that the difference is not significant at the 95% confidence level. Because we know that prop.test() is only using an approximation I want to make things more exact by using an exact binomial test - and I do it both ways around: > binom.test(x=17,n=25,p=8/20)
Exact binomial test
data: 17 and 25
number of successes = 17, number of trials = 25, p-value = 0.006693
alternative hypothesis: true probability of success is not equal to 0.4
95 percent confidence interval:
0.4649993 0.8505046
sample estimates:
probability of success
0.68
> binom.test(x=8,n=20,p=17/25)
Exact binomial test
data: 8 and 20
number of successes = 8, number of trials = 20, p-value = 0.01377
alternative hypothesis: true probability of success is not equal to 0.68
95 percent confidence interval:
0.1911901 0.6394574
sample estimates:
probability of success
0.4 Now this is strange, isn't it? The p-values are totally different each time! In both cases now the results are (highly) significant but the p-values seem to jump around rather haphazardly. My questions Why are the p-values that different each time? How to perform an exact two sample proportions binomial test in R correctly? | If you are looking for an 'exact' test for two binomial proportions, I believe you are looking for Fisher's Exact Test . In R it is applied like so: > fisher.test(matrix(c(17, 25-17, 8, 20-8), ncol=2))
Fisher's Exact Test for Count Data
data: matrix(c(17, 25 - 17, 8, 20 - 8), ncol = 2)
p-value = 0.07671
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
0.7990888 13.0020065
sample estimates:
odds ratio
3.101466 The fisher.test function accepts a matrix object of the 'successes' and 'failures' of the two binomial proportions. As you can see, however, the two-sided hypothesis is still not significant, sorry to say. However, Fisher's Exact test is typically only applied when a cell count is low (typically this means 5 or less, though some say 10), so your initial use of prop.test is more appropriate. Regarding your binom.test calls, you are misunderstanding the call. When you run binom.test(x=17,n=25,p=8/20) you are testing whether the proportion is significantly different from a population where the probability of success is 8/20 . Likewise, binom.test(x=8,n=20,p=17/25) tests against a probability of success of 17/25, which is why these p-values differ. Therefore you are not comparing the two proportions at all. | {
"source": [
"https://stats.stackexchange.com/questions/123609",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/230/"
]
} |
124,365 | Mean absolute scaled error (MASE) is a measure of forecast accuracy proposed by Koehler & Hyndman (2006) . $$MASE=\frac{MAE}{MAE_{in-sample, \, naive}}$$ where $MAE$ is the mean absolute error produced by the actual forecast; while $MAE_{in-sample, \, naive}$ is the mean absolute error produced by a naive forecast (e.g. no-change forecast for an integrated $I(1)$ time series), calculated on the in-sample data. (Check out the Koehler & Hyndman (2006) paper for a precise definition and formula.) $MASE>1$ implies that the actual forecast does worse out of sample than a naive forecast did in sample, in terms of mean absolute error. Thus if mean absolute error is the relevant measure of forecast accuracy (which depends on the problem at hand), $MASE>1$ suggests that the actual forecast should be discarded in favour of a naive forecast if we expect the out-of-sample data to be quite like the in-sample data (because we only know how well a naive forecast performed in sample, not out of sample). Question: $MASE=1.38$ was used as a benchmark in a forecasting competition proposed in this Hyndsight blog post . Shouldn't an obvious benchmark have been $MASE=1$? Of course, this question is not specific to the particular forecasting competition. I would like some help on understanding this in a more general context. My guess: The only sensible explanation I see is that a naive forecast was expected to do quite worse out of sample than it did in sample, e.g. due to a structural change. Then $MASE<1$ might have been too challenging to achieve. References: Hyndman, Rob J., and Anne B. Koehler. " Another look at measures of forecast accuracy. " International journal of forecasting 22.4 (2006): 679-688. Hyndsight blog post . | In the linked blog post , Rob Hyndman calls for entries to a tourism forecasting competition. Essentially, the blog post serves to draw attention to the relevant IJF article , an ungated version of which is linked to in the blog post. The benchmarks you refer to - 1.38 for monthly, 1.43 for quarterly and 2.28 for yearly data - were apparently arrived at as follows. The authors (all of them are expert forecasters and very active in the IIF - no snake oil salesmen here) are quite capable of applying standard forecasting algorithms or forecasting software, and they are probably not interested in simple ARIMA submission. So they went and applied some standard methods to their data. For the winning submission to be invited for a paper in the IJF , they ask that it improve on the best of these standard methods, as measured by the MASE. So your question essentially boils down to: Given that a MASE of 1 corresponds to a forecast that is out-of-sample as good (by MAD) as the naive random walk forecast in-sample, why can't standard forecasting methods like ARIMA improve on 1.38 for monthly data? Here, the 1.38 MASE comes from Table 4 in the ungated version. It is the average ASE over 1-24 month ahead forecasts from ARIMA. The other standard methods, like ForecastPro, ETS etc. perform even worse. And here, the answer gets hard . It is always very problematic to judge forecast accuracy without considering the data. One possibility I could think of in this particular case could be accelerating trends. Suppose that you try to forecast $\exp(t)$ with standard methods. 
None of these will capture the accelerating trend (and this is usually a Good Thing - if your forecasting algorithm often models an accelerating trend, you will likely far overshoot your mark), and they will yield a MASE that is above 1. Other explanations could, as you say, be different structural breaks, e.g., level shifts or external influences like SARS or 9/11, which would not be captured by the non-causal benchmark models, but which could be modeled by dedicated tourism forecasting methods (although using future causals in a holdout sample is a kind of cheating). So I'd say that you likely can't say a lot about this without looking at the data themselves. They are available on Kaggle. Your best bet is likely to take these 518 series, hold out the last 24 months, fit ARIMA models, calculate MASEs, dig out the ten or twenty MASE-worst forecast series, get a big pot of coffee, look at these series and try to figure out what it is that makes ARIMA models so bad at forecasting them. EDIT: another point that appears obvious after the fact but took me five days to see - remember that the denominator of the MASE is the one-step ahead in-sample random walk forecast, whereas the numerator is the average of the 1-24-step ahead forecasts. It's not too surprising that forecasts deteriorate with increasing horizons, so this may be another reason for a MASE of 1.38. Note that the Seasonal Naive forecast was also included in the benchmark and had an even higher MASE. | {
"source": [
"https://stats.stackexchange.com/questions/124365",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/53690/"
]
} |
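A minimal R sketch of the MASE computation described above, using a simulated series as a stand-in for the competition data (the function and variable names here are illustrative, not from any package):
# MASE = out-of-sample MAE of a forecast, divided by the in-sample MAE of the
# one-step naive (random walk) forecast.
mase <- function(y_train, y_test, fc, lag = 1) {
  mae_out  <- mean(abs(y_test - fc))               # numerator: forecast MAE on the holdout
  scale_in <- mean(abs(diff(y_train, lag = lag)))  # denominator: in-sample naive MAE
  mae_out / scale_in
}
set.seed(1)
y <- cumsum(rnorm(120, mean = 0.2))                          # a drifting random walk
y_train <- y[1:96]; y_test <- y[97:120]
mase(y_train, y_test, fc = rep(mean(y_train), 24))           # in-sample mean forecast: MASE well above 1
mase(y_train, y_test, fc = rep(tail(y_train, 1), 24))        # naive level forecast: often above 1 too,
                                                             # since 1-24-step errors are scaled by one-step errors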
124,534 | I am trying to understand the differences between the linear dimensionality reduction methods (e.g., PCA) and the nonlinear ones (e.g., Isomap). I cannot quite understand what the (non)linearity implies in this context. I read from Wikipedia that By comparison, if PCA (a linear dimensionality reduction algorithm) is
used to reduce this same dataset into two dimensions, the resulting
values are not so well organized. This demonstrates that the
high-dimensional vectors (each representing a letter 'A') that sample
this manifold vary in a non-linear manner. What does the high-dimensional vectors (each representing a letter 'A') that
sample this manifold vary in a non-linear manner. mean? Or more broadly, how do I understand the (non)linearity in this context? | A picture is worth a thousand words: Here we are looking for 1-dimensional structure in 2D. The points lie along an S-shaped curve. PCA tries to describe the data with a linear 1-dimensional manifold, which is simply a line; of course a line fits these data quite badly. Isomap is looking for a nonlinear (i.e. curved!) 1-dimensional manifold, and should be able to discover the underlying S-shaped curve. | {
"source": [
"https://stats.stackexchange.com/questions/124534",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/26881/"
]
} |
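A small R sketch of the same point, assuming nothing more than base R: simulated points along an S-shaped curve have one-dimensional structure, but the single straight line found by PCA describes them poorly.
set.seed(2)
t <- runif(400, -1.5 * pi, 1.5 * pi)                  # the hidden 1-D coordinate along the curve
X <- cbind(sin(t), sign(t) * (cos(t) - 1))            # an S-shaped curve embedded in 2D
X <- X + matrix(rnorm(800, sd = 0.03), ncol = 2)      # a little noise
p <- prcomp(X)
summary(p)          # PC1 alone leaves a substantial share of the variance unexplained: a line cannot follow the curve
cor(t, p$x[, 1])    # position along the curve is only partly recovered by the first linear component
plot(X, asp = 1, pch = 19)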
124,628 | I've been reading a bit on boosting algorithms for classification tasks and Adaboost in particular. I understand that the purpose of Adaboost is to take several "weak learners" and, through a set of iterations on training data, push classifiers to learn to predict classes that the model(s) repeatedly make mistakes on. However, I was wondering why so many of the readings I've done have used decision trees as the weak classifier. Is there a particular reason for this? Are there certain classifiers that make particularly good or bad candidates for Adaboost? | I talked about this in an answer to a related SO question . Decision trees are just generally a very good fit for boosting, much more so than other algorithms. The bullet point/ summary version is this: Decision trees are non-linear. Boosting with linear models simply doesn't work well. The weak learner needs to be consistently better than random guessing. You don't normally need to do any parameter tuning to a decision tree to get that behavior. Training an SVM really does need a parameter search. Since the data is re-weighted on each iteration, you likely need to do another parameter search on each iteration. So you are increasing the amount of work you have to do by a large margin. Decision trees are reasonably fast to train. Since we are going to be building 100s or 1000s of them, that's a good property. They are also fast to classify, which is again important when you need 100s or 1000s to run before you can output your decision. By changing the depth you have a simple and easy control over the bias/variance trade off, knowing that boosting can reduce bias but also significantly reduces variance. Boosting is known to overfit, so the easy knob to tune is helpful in that regard. | {
"source": [
"https://stats.stackexchange.com/questions/124628",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/34993/"
]
} |
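For readers who want to see the mechanics, below is a bare-bones sketch of the AdaBoost.M1 loop with depth-1 rpart trees (stumps) as the weak learner; it is illustrative only, with made-up data, and a dedicated boosting package would be preferable in practice.
library(rpart)
set.seed(3)
n   <- 400
dat <- data.frame(x1 = runif(n), x2 = runif(n))
dat$y <- factor(ifelse(dat$x1^2 + dat$x2^2 > 0.7, "a", "b"))  # circular boundary: hard for any single stump
w <- rep(1 / n, n); M <- 50
alpha <- numeric(M); stumps <- vector("list", M)
for (m in 1:M) {
  fit  <- rpart(y ~ x1 + x2, data = dat, weights = w, method = "class",
                control = rpart.control(maxdepth = 1, cp = 0, minsplit = 2))
  pred <- predict(fit, dat, type = "class")
  err  <- sum(w * (pred != dat$y)) / sum(w)
  alpha[m] <- 0.5 * log((1 - err) / err)               # stump weight; assumes 0 < err < 0.5
  w <- w * exp(alpha[m] * (2 * (pred != dat$y) - 1))   # up-weight the current mistakes
  w <- w / sum(w)
  stumps[[m]] <- fit
}
vote <- Reduce(`+`, lapply(1:M, function(m)
  alpha[m] * ifelse(predict(stumps[[m]], dat, type = "class") == "a", 1, -1)))
mean(ifelse(vote > 0, "a", "b") == dat$y)              # ensemble training accuracy, well above a single stump's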
124,818 | On whether an error term exists in logistic regression (and its assumed distribution), I have read in various places that: no error term exists the error term has a binomial distribution (in accordance with the distribution of the response variable) the error term has a logistic distribution Can someone please clarify? | In linear regression observations are assumed to follow a Gaussian distribution with a mean parameter conditional on the predictor values. If you subtract the mean from the observations you get the error : a Gaussian distribution with mean zero, & independent of predictor values—that is errors at any set of predictor values follow the same distribution. In logistic regression observations $y\in\{0,1\}$ are assumed to follow a Bernoulli distribution † with a mean parameter (a probability) conditional on the predictor values. So for any given predictor values determining a mean $\pi$ there are only two possible errors: $1-\pi$ occurring with probability $\pi$, & $0-\pi$ occurring with probability $1-\pi$. For other predictor values the errors will be $1-\pi'$ occurring with probability $\pi'$, & $0-\pi'$ occurring with probability $1-\pi'$. So there's no common error distribution independent of predictor values, which is why people say "no error term exists" (1). "The error term has a binomial distribution" (2) is just sloppiness—"Gaussian models have Gaussian errors, ergo binomial models have binomial errors". (Or, as @whuber points out, it could be taken to mean "the difference between an observation and its expectation has a binomial distribution translated by the expectation".) "The error term has a logistic distribution" (3) arises from the derivation of logistic regression from the model where you observe whether or not a latent variable with errors following a logistic distribution exceeds some threshold. So it's not the same error defined above. (It would seem an odd thing to say IMO outside that context, or without explicit reference to the latent variable.) † If you have $k$ observations with the same predictor values, giving the same probability $\pi$ for each, then their sum $\sum y$ follows a binomial distribution with probability $\pi$ and no. trials $k$. Considering $\sum y -k\pi$ as the error leads to the same conclusions. | {
"source": [
"https://stats.stackexchange.com/questions/124818",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/61124/"
]
} |
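A short simulation in R makes the two-valued nature of the response-scale residuals visible (the data below are made up purely for illustration):
set.seed(4)
x <- rep(c(-1, 0, 2), each = 200)                    # three distinct predictor values
y <- rbinom(length(x), size = 1, prob = plogis(0.5 + 1.2 * x))
fit <- glm(y ~ x, family = binomial)
r <- residuals(fit, type = "response")               # y minus the fitted probability
tapply(r, x, function(z) round(sort(unique(z)), 3))  # only two residual values at each x, and they shift with x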
125,084 | For a unimodal distribution, if mean = median then is it sufficient to say that distribution is symmetric? Wikipedia says in relationship between mean and median: "If the distribution is symmetric then the mean is equal to the median
and the distribution will have zero skewness. If, in addition, the
distribution is unimodal, then the mean = median = mode. This is the
case of a coin toss or the series 1,2,3,4,... Note, however, that the
converse is not true in general, i.e. zero skewness does not imply
that the mean is equal to the median." However, it is not very straight forward (to me) to glean the information I need. Any help please. | Here is a small counterexample that is not symmetric: -3, -2, 0, 0, 1, 4 is unimodal with mode = median = mean = 0. Edit: An even smaller example is -2, -1, 0, 0, 3. If you want to imagine a random variable rather than a sample, take the support as {-2, -1, 0, 3} with probability mass function 0.2 on all of them except for 0 where it is 0.4. | {
"source": [
"https://stats.stackexchange.com/questions/125084",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/36948/"
]
} |
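The counterexample is easy to check numerically in R:
x <- c(-2, -1, 0, 0, 3)
c(mean = mean(x), median = median(x))   # both are 0
table(x)                                # single mode at 0, yet the sample is clearly asymmetric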
125,561 | I have a model of Movies dataset and I used the regression: model <- lm(imdbVotes ~ imdbRating + tomatoRating + tomatoUserReviews+ I(genre1 ** 3.0) +I(genre2 ** 2.0)+I(genre3 ** 1.0), data = movies)
library(ggplot2)
res <- qplot(fitted(model), resid(model))
res+geom_hline(yintercept=0) Which gave the output: Now I tried working something called Added Variable Plot first time and I got the following output: car::avPlots(model, id.n=2, id.cex=0.7) The problem is I tried to understand Added Variable Plot using google but I couldn't understand its depth, seeing the plot I understood that its kind of representation of skewing based on each of the input variable related to the output. Can I get bit more details like how its justifies the data normalization? | For illustration I will take a less complex regression model $Y = \beta_1 + \beta_2 X_2 + \beta_3 X_3 + \epsilon$ where the predictor variables $X_2$ and $X_3$ may be correlated. Let's say the slopes $\beta_2$ and $\beta_3$ are both positive so we can say that (i) $Y$ increases as $X_2$ increases, if $X_3$ is held constant, since $\beta_2$ is positive; (ii) $Y$ increases as $X_3$ increases, if $X_2$ is held constant, since $\beta_3$ is positive. Note that it's important to interpret multiple regression coefficients by considering what happens when the other variables are held constant ("ceteris paribus"). Suppose I just regressed $Y$ against $X_2$ with a model $Y = \beta_1' + \beta_2' X_2 + \epsilon'$ . My estimate for the slope coefficient $\beta_2'$ , which measures the effect on $Y$ of a one unit increase in $X_2$ without holding $X_3$ constant, may be different from my estimate of $\beta_2$ from the multiple regression - that also measures the effect on $Y$ of a one unit increase in $X_2$ , but it does hold $X_3$ constant. The problem with my estimate $\hat{\beta_2'}$ is that it suffers from omitted-variable bias if $X_2$ and $X_3$ are correlated. To understand why, imagine $X_2$ and $X_3$ are negatively correlated. Now when I increase $X_2$ by one unit, I know the mean value of $Y$ should increase since $\beta_2 > 0$ . But as $X_2$ increases, if we don't hold $X_3$ constant then $X_3$ tends to decrease, and since $\beta_3 > 0$ this will tend to reduce the mean value of $Y$ . So the overall effect of a one unit increase in $X_2$ will appear lower if I allow $X_3$ to vary also, hence $\beta_2' < \beta_2$ . Things get worse the more strongly $X_2$ and $X_3$ are correlated, and the larger the effect of $X_3$ through $\beta_3$ - in a really severe case we may even find $\beta_2' < 0$ even though we know that, ceteris paribus, $X_2$ has a positive influence on $Y$ ! Hopefully you can now see why drawing a graph of $Y$ against $X_2$ would be a poor way to visualise the relationship between $Y$ and $X_2$ in your model. In my example, your eye would be drawn to a line of best fit with slope $\hat{\beta_2'}$ that doesn't reflect the $\hat{\beta_2}$ from your regression model. In the worst case, your model may predict that $Y$ increases as $X_2$ increases (with other variables held constant) and yet the points on the graph suggest $Y$ decreases as $X_2$ increases. The problem is that in the simple graph of $Y$ against $X_2$ , the other variables aren't held constant. This is the crucial insight into the benefit of an added variable plot (also called a partial regression plot) - it uses the Frisch-Waugh-Lovell theorem to "partial out" the effect of other predictors. The horizonal and vertical axes on the plot are perhaps most easily understood* as " $X_2$ after other predictors are accounted for" and " $Y$ after other predictors are accounted for". You can now look at the relationship between $Y$ and $X_2$ once all other predictors have been accounted for . 
So for example, the slope you can see in each plot now reflects the partial regression coefficients from your original multiple regression model. A lot of the value of an added variable plot comes at the regression diagnostic stage, especially since the residuals in the added variable plot are precisely the residuals from the original multiple regression. This means outliers and heteroskedasticity can be identified in a similar way to when looking at the plot of a simple rather than multiple regression model. Influential points can also be seen - this is useful in multiple regression since some influential points are not obvious in the original data before you take the other variables into account. In my example, a moderately large $X_2$ value may not look out of place in the table of data, but if the $X_3$ value is large as well despite $X_2$ and $X_3$ being negatively correlated then the combination is rare. "Accounting for other predictors", that $X_2$ value is unusually large and will stick out more prominently on your added variable plot. $*$ More technically they would be the residuals from running two other multiple regressions: the residuals from regressing $Y$ against all predictors other than $X_2$ go on the vertical axis, while the residuals from regression $X_2$ against all other predictors go on the horizontal axis. This is really what the legends of " $Y$ given others" and " $X_2$ given others" are telling you. Since the mean residual from both of these regressions is zero, the mean point of ( $X_2$ given others, $Y$ given others) will just be (0, 0) which explains why the regression line in the added variable plot always goes through the origin. But I often find that mentioning the axes are just residuals from other regressions confuses people (unsurprising perhaps since we now are talking about four different regressions!) so I have tried not to dwell on the matter. Comprehend them as " $X_2$ given others" and " $Y$ given others" and you should be fine. | {
"source": [
"https://stats.stackexchange.com/questions/125561",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/45813/"
]
} |
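The residual-on-residual reading of the added-variable plot can be verified in R with simulated data (the movie data set is not available here, so the variables below mirror the $X_2$, $X_3$ example in the answer):
set.seed(6)
n  <- 200
x3 <- rnorm(n)
x2 <- -0.6 * x3 + rnorm(n)                 # correlated predictors, as in the answer
y  <- 1 + 2 * x2 + 3 * x3 + rnorm(n)
full <- lm(y ~ x2 + x3)
ry <- resid(lm(y ~ x3))                    # "Y given others"
rx <- resid(lm(x2 ~ x3))                   # "X2 given others"
coef(lm(ry ~ rx))["rx"]                    # slope in the added-variable plot ...
coef(full)["x2"]                           # ... equals the multiple-regression coefficient exactly
plot(rx, ry); abline(lm(ry ~ rx))          # a hand-rolled added-variable plot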
126,048 | I applied logistic regression to my data on SAS and here are the ROC curve and classification table. I am comfortable with the figures in the classification table, but not exactly sure what the ROC curve and the area under it show. Any explanation would be greatly appreciated. | When you do logistic regression, you are given two classes coded as $1$ and $0$. Now, you compute probabilities that given some explanatory variables an individual belongs to the class coded as $1$. If you now choose a probability threshold and classify all individuals with a probability greater than this threshold as class $1$ and below as $0$, you will in most cases make some errors because usually two groups cannot be discriminated perfectly. For this threshold you can now compute your errors and the so-called sensitivity and specificity. If you do this for many thresholds, you can construct a ROC curve by plotting sensitivity against 1-Specificity for many possible thresholds. The area under the curve comes into play if you want to compare different methods that try to discriminate between two classes, e.g. discriminant analysis or a probit model. You can construct the ROC curve for all these models and the one with the highest area under the curve can be seen as the best model. If you need to get a deeper understanding, you can also read the answer to a different question regarding ROC curves by clicking here. | {
"source": [
"https://stats.stackexchange.com/questions/126048",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/18549/"
]
} |
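For readers who want to see how the curve is built rather than just read the SAS output, here is a hand-rolled ROC curve in R on simulated data:
set.seed(7)
x <- rnorm(500)
y <- rbinom(500, 1, plogis(-0.3 + 1.5 * x))
p <- fitted(glm(y ~ x, family = binomial))              # predicted probabilities
th <- sort(unique(p))
sens <- sapply(th, function(t) mean(p[y == 1] >= t))    # sensitivity at each threshold
spec <- sapply(th, function(t) mean(p[y == 0] <  t))    # specificity at each threshold
plot(1 - spec, sens, type = "l", xlab = "1 - specificity", ylab = "sensitivity")
abline(0, 1, lty = 2)                                   # the no-information diagonal
mean(outer(p[y == 1], p[y == 0], ">"))                  # AUC: P(random case outranks random control), ties ignored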
126,238 | The state of the art of non-linearity is to use rectified linear units (ReLU) instead of the sigmoid function in deep neural networks. What are the advantages? I know that training a network when ReLU is used would be faster, and it is more biologically inspired, what are the other advantages? (That is, any disadvantages of using sigmoid)? | Two additional major benefits of ReLUs are sparsity and a reduced likelihood of vanishing gradient. But first recall the definition of a ReLU is $h = \max(0, a)$ where $a = Wx + b$. One major benefit is the reduced likelihood of the gradient vanishing. This arises when $a > 0$. In this regime the gradient has a constant value. In contrast, the gradient of sigmoids becomes increasingly small as the absolute value of $x$ increases. The constant gradient of ReLUs results in faster learning. The other benefit of ReLUs is sparsity. Sparsity arises when $a \le 0$. The more such units that exist in a layer the more sparse the resulting representation. Sigmoids on the other hand are always likely to generate some non-zero value resulting in dense representations. Sparse representations seem to be more beneficial than dense representations. | {
"source": [
"https://stats.stackexchange.com/questions/126238",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/41749/"
]
} |
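The vanishing-gradient contrast can be drawn in a few lines of R:
a <- seq(-6, 6, by = 0.01)
d_sigmoid <- plogis(a) * (1 - plogis(a))   # sigmoid derivative: at most 0.25, tiny for large |a|
d_relu <- as.numeric(a > 0)                # ReLU derivative: exactly 1 whenever a > 0
plot(a, d_relu, type = "l", ylab = "derivative of the activation")
lines(a, d_sigmoid, lty = 2)
legend("topleft", c("ReLU", "sigmoid"), lty = 1:2, bty = "n")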
126,346 | What is meant by the statement that the kurtosis of a normal distribution is 3. Does it mean that on the horizontal line, the value of 3 corresponds to the peak probability, i.e. 3 is the mode of the system? When I look at a normal curve, it seems the peak occurs at the center, a.k.a at 0. So why is the kurtosis not 0 and instead 3? | Kurtosis is certainly not the location of where the peak is. As you say, that's already called the mode. Kurtosis is the standardized fourth moment: If $Z=\frac{X-\mu}{\sigma}$, is a standardized version of the variable we're looking at, then the population kurtosis is the average fourth power of that standardized variable; $E(Z^4)$. The sample kurtosis is correspondingly related to the mean fourth power of a standardized set of sample values (in some cases it is scaled by a factor that goes to 1 in large samples). As you note, this fourth standardized moment is 3 in the case of a normal random variable. As Alecos notes in comments, some people define kurtosis as $E(Z^4)-3$; that's sometimes called excess kurtosis (it's also the fourth cumulant). When seeing the word 'kurtosis' you need to keep in mind this possibility that different people use the same word to refer to two different (but closely related) quantities. Kurtosis is usually either described as peakedness* (say, how sharply curved the peak is - which was presumably the intent of choosing the word "kurtosis") or heavy-tailedness (often what people are interested in using it to measure), but in actual fact the usual fourth standardized moment doesn't quite measure either of those things. Indeed, the first volume of Kendall and Stuart give counterexamples that show that higher kurtosis is not necessarily associated with either higher peak (in a standardized variable) or fatter tails (in rather similar way that the third moment doesn't quite measure what many people think it does). However in many situations there's some tendency to be associated with both, in that greater peakedness and heavy tailedness often tend to be seen when kurtosis is higher -- we should simply beware thinking it is necessarily the case. Kurtosis and skewness are strongly related (the kurtosis must be at least 1 more than the square of the skewness; interpretation of kurtosis is somewhat easier when the distribution is nearly symmetric. Darlington (1970) and Moors (1986) showed that the fourth moment measure of kurtosis is in effect variability about "the shoulders" - $\mu\pm\sigma$, and Balanda and MacGillivray (1988) suggest thinking of it in vague terms related to that sense (and consider some other ways to measure it). If the distribution is closely concentrated about $\mu\pm\sigma$, then kurtosis is (necessarily) small, while if the distribution is spread out away from $\mu\pm\sigma$ (which will tend to simultaneously pile it up in the center and move probability into the tails in order to move it away from the shoulders), fourth-moment kurtosis will be large. De Carlo (1997) is a reasonable starting place (after more basic resources like Wikipedia) for reading about kurtosis. Edit: I see some occasional questioning of whether higher peakedness (values near 0) can affect kurtosis at all. The answer is yes, definitely it can. That this is the case is a consequence of it being the fourth moment of a standardized variable -- to increase the fourth moment of a standardized variate you must increase $E(Z^4)$ while holding $E(Z^2)$ constant . 
This means that movement of probability further into the tail must be accompanied by some further in (inside $(-1,1)$); and vice versa -- if you put more weight at the center while holding the variance at 1, you also put some out in the tail. [NB as discussed in comments this is incorrect as a general statement; a somewhat different statement is required here.] This effect of variance being held constant is directly connected to the discussion of kurtosis as "variation about the shoulders" in Darlington and Moors' papers. That result is not some handwavy notion, but a plain mathematical equivalence - one cannot hold it to be otherwise without misrepresenting kurtosis. Now it's possible to increase the probability inside $(-1,1)$ without lifting the peak. Equally, it's possible to increase the probability outside $(-1,1)$ without necessarily making the distant tail heavier (by some typical tail-index, say). That is, it's quite possible to raise kurtosis while making the tail lighter (e.g. having a lighter tail beyond 2sds either side of the mean, say). [My inclusion of Kendall and Stuart in the references is because their discussion of kurtosis is also relevant to this point.] So what can we say? Kurtosis is often associated with a higher peak and with a heavier tail, without having to occur with either. Certainly it's easier to lift kurtosis by playing with the tail (since it's possible to get more than 1 sd away) then adjusting the center to keep variance constant, but that doesn't mean that the peak has no impact; it assuredly does, and one can manipulate kurtosis by focusing on it instead. Kurtosis is largely but not only associated with tail heaviness -- again, look to the variation about the shoulders result; if anything that's what kurtosis is looking at, in an unavoidable mathematical sense. References Balanda, K.P. and MacGillivray, H.L. (1988), "Kurtosis: A critical review." American Statistician 42 , 111-119. Darlington, Richard B. (1970), "Is Kurtosis Really "Peakedness?"." American Statistician 24 , 19-22. Moors, J.J.A. (1986), "The meaning of kurtosis: Darlington reexamined." American Statistician 40 , 283-284. DeCarlo, L.T. (1997), "On the meaning and use of kurtosis." Psychol. Methods, 2 , 292-307. Kendall, M. G., and A. Stuart, The Advanced Theory of Statistics , Vol. 1, 3rd Ed. (more recent editions have Stuart and Ord) | {
"source": [
"https://stats.stackexchange.com/questions/126346",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/61158/"
]
} |
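The sample version of the fourth standardized moment is a one-liner in R, and a quick simulation shows how it behaves for a few familiar distributions (this uses the uncorrected definition, so sd()'s n-1 denominator makes it very slightly biased):
kurt <- function(x) { z <- (x - mean(x)) / sd(x); mean(z^4) }
set.seed(9)
kurt(rnorm(1e6))        # about 3 for the normal
kurt(runif(1e6))        # about 1.8: little weight far from mu +/- sigma
kurt(rt(1e6, df = 5))   # about 9: much more weight far from the "shoulders"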
126,591 | Is there an official name for this extremely simple plot, in which vertical lines indicate the distribution of some samples in a range? | The first example I have seen them referenced in are Strips displaying empirical distributions: I. textured dot strips (Tukey and Tukey, 1990) although I have never been able to actually get that technical report. Tim is right: they are often accompanied as the rug on an additional plot to show the location of individual observations, but rug plot is a bit more general and that type of plot is not always on the rug of another plot as your question shows! Here is an example of using points on the rug instead of lines. Here is an example of the rug being points and not displaying all of the data, but only data missing in the other dimension of a scatterplot. So a rug plot is not always a set of lines on the borders of another graph, and that type of plot in your question is not always on the margins of another plot. Here is an example of the lines superimposed on a kernel density instead of on the rug of the plot, called a beanplot . The larger lines I believe are used to visualize different quantiles (a.k.a. letter values) of the distribution. (source: biomedcentral.com ) In Wilkinson's Grammar of Graphics it may be considered a one-dimensional scatterplot but using line segments instead of the typical default of circles. The point of this is to prevent many of the nearby points from being superimposed. If you have many points and draw them semi-transparently they eventually turn into a density strip, see the final picture in this post . I've even seen them suggested to use as sparklines ( Greenhill et al., 2011 ) in that example to visualize binary data. Greenhill calls them in that example separation plots , and here is an example taken from the referenced paper (p.995): So in that example there are values along the entire axis, and color is used to visualize a binary variable. The black line in that plot is the cumulative proportion of red observations. | {
"source": [
"https://stats.stackexchange.com/questions/126591",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/14803/"
]
} |
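In base R this kind of display can be produced either on its own or as the rug of another plot, for example:
set.seed(10)
x <- rexp(200)
stripchart(x, method = "overplot", pch = "|")   # a stand-alone strip of vertical ticks
plot(density(x), main = ""); rug(x)             # the same marks drawn as the rug of a density estimate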
126,791 | A stochastic process is a process that evolves over time, so is it really a fancier way of saying "time series"? | Because many troubling discrepancies are showing up in comments and answers, let's refer to some authorities. James Hamilton does not even define a time series, but he is clear about what one is: ... this set of $T$ numbers is only one possible outcome of the underlying stochastic process that generated the data. Indeed, even if we were to imagine having observed the process for an infinite period of time, arriving at the sequence $$\{y_t\}_{t=-\infty}^\infty = \{\ldots, y_{-1}, y_0, y_1, y_2, \ldots, y_T, y_{T+1}, y_{T+2}, \ldots, \},$$ the infinite sequence $\{y_t\}_{t=-\infty}^\infty$ would still be viewed as a single realization from a time series process. ... Imagine a battery of $I$ ... computers generating sequences $\{y_t^{(1)}\}_{t=-\infty}^{\infty},$ $\{y_t^{(2)}\}_{t=-\infty}^{\infty}, \ldots,$ $ \{y_t^{(I)}\}_{t=-\infty}^{\infty}$ , and consider selecting the observation associated with date $t$ from each sequence: $$\{y_t^{(1)}, y_t^{(2)}, \ldots, y_t^{(I)}\}.$$ This would be described as a sample of $I$ realizations of the random variable $Y_t$ . ... ( Time Series Analysis , Chapter 3.) Thus, a "time series process" is a set of random variables $\{Y_t\}$ indexed by integers $t$ . In Stochastic Differential Equations, Bernt Øksendal provides a standard mathematical definition of a general stochastic process: Definition 2.1.4. A stochastic process is a parametrized collection of random variables $$\{X_t\}_{t\in T}$$ defined on a probability space $(\Omega, \mathcal{F}, \mathcal{P})$ and assuming values in $\mathbb{R}^n$ . The parameter space $T$ is usually (as in this book) the halfline $[0,\infty)$ , but it may also be an interval $[a,b]$ , the non-negative integers, and even subsets of $\mathbb{R}^n$ for $n\ge 1$ . Putting the two together, we see that a time series process is a stochastic process indexed by integers. Some people use "time series" to refer to a realization of a time series process (as in the Wikipedia article ). We can see in Hamilton's language a reasonable effort to distinguish the process from the realization by his use of "time series process," so that he can use "time series" to refer to realizations (or even data). | {
"source": [
"https://stats.stackexchange.com/questions/126791",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/61158/"
]
} |
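Hamilton's picture of a battery of computers is easy to mimic in R: several realizations of one and the same time-series process, each of which is a single time series.
set.seed(11)
paths <- replicate(5, arima.sim(model = list(ar = 0.8), n = 200))
matplot(paths, type = "l", lty = 1, xlab = "t", ylab = expression(y[t]))
# Each column (each line in the plot) is one realization, i.e. one "time series";
# the AR(1) process is the whole family of random variables that generates them.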
126,904 | Is there a simulation method that is not Monte Carlo? All simulation methods involve substituting random numbers into the function to find a range of values for the function. So are all simulation methods in essence Monte Carlo methods? | There are simulations that are not Monte Carlo. Basically, all Monte Carlo methods use the (weak) law of large numbers: The mean converges to its expectation. Then there are Quasi Monte Carlo methods. These are simulated with a compromise of random numbers and equally spaced grids to yield faster convergence. Simulations that are not Monte Carlo are e.g. used in computational fluid dynamics. It is easy to model fluid dynamics on a "micro scale" of single portions of the fluid. These portions have an initial speed, pressure and size and are affected by forces from the neighbouring portions or by solid bodies. Simulations compute the whole behaviour of the fluid by calculating all the portions and their interaction. Doing this efficiently makes this a science. No random numbers are needed there. In meteorology or climate research, things are done similarly. But now, the initial values are not exactly known: You only have the meteorological data at some points where they have been measured. A lot of data has to be guessed. As these complicated problems are often not continuous in their input data, you run the simulations with different guesses. The final result will be chosen among the most frequent outcomes. This is actually how some weather forecasts are simulated in principle. | {
"source": [
"https://stats.stackexchange.com/questions/126904",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/61158/"
]
} |
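As a concrete non-Monte-Carlo simulation in the spirit of the fluid example, here is a deterministic explicit finite-difference scheme for the one-dimensional heat equation; no random numbers appear anywhere.
nx <- 50; nt <- 500; alpha <- 0.2                 # alpha <= 0.5 keeps the explicit scheme stable
u <- c(rep(0, 20), rep(1, 10), rep(0, 20))        # initial temperature profile
for (i in 1:nt) {
  left  <- c(u[1], u[-nx])                        # neighbouring cells, with the ends repeated
  right <- c(u[-1], u[nx])
  u <- u + alpha * (left - 2 * u + right)
}
plot(u, type = "l", xlab = "position", ylab = "temperature after diffusion")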
127,042 | Since Logistic Regression is a statistical classification model dealing with categorical dependent variables, why isn't it called Logistic Classification ? Shouldn't the "Regression" name be reserved for models dealing with continuous dependent variables? | Logistic regression is emphatically not a classification algorithm on its own. It is only a classification algorithm in combination with a decision rule that makes dichotomous the predicted probabilities of the outcome. Logistic regression is a regression model because it estimates the probability of class membership as a (transformation of a) multilinear function of the features. Frank Harrell has posted a number of answers on this website enumerating the pitfalls of regarding logistic regression as a classification algorithm. Among them: Classification is a decision . To make an optimal decision, you need to assess a utility function, which implies that you need to account for the uncertainty in the outcome, i.e. a probability. The costs of misclassification are not uniform across all units. Don't use cutoffs. Use proper scoring rules. The problem is actually risk estimation, not classification. If I recall correctly, he once pointed me to his book on regression strategies for more elaboration on these (and more!) points, but I can't seem to find that particular post. | {
"source": [
"https://stats.stackexchange.com/questions/127042",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/62245/"
]
} |
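A small R sketch of the 'probability model plus decision rule' point, with simulated data and made-up misclassification costs:
set.seed(13)
x <- rnorm(1000)
y <- rbinom(1000, 1, plogis(x))
p <- fitted(glm(y ~ x, family = binomial))         # the regression part: probabilities
cost <- function(cut, c_fp = 1, c_fn = 5)          # assumed costs: a missed 1 is five times worse than a false alarm
  sum(c_fp * (p >= cut & y == 0) + c_fn * (p < cut & y == 1))
cuts <- seq(0.05, 0.95, by = 0.01)
cuts[which.min(sapply(cuts, cost))]                # the cost-optimal cutoff is far from 0.5; change the costs and it moves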
127,598 | Why do you square things in stats? I have run across this a lot, in both data mining and statistics classes, but no one has ever been able to give me an answer. One specific example is when summing the deviation scores in statistics you have to square them (otherwise the sum is 0). Why do you square them rather then using something else, like absolute value. Difference between prior question: If you have an answer for the problem above, does your answer apply to most statistics stuff that does this? If not, why not. | $\newcommand{\predicted}{{\rm predicted}}\newcommand{\actual}{{\rm actual}}\newcommand{\Var}{{\rm Var}}$ You're right that one could instead choose to use the absolute error--in fact, the absolute error is often closer to what you "care about" when making predictions from your model. For instance, if you buy a stock expecting its future price to be $P_{\predicted}$ and its future price is $P_{\actual}$ instead, you lose money proportional to $(P_{\predicted} - P_{\actual})$ , not its square! The same is true in many other contexts. So why squared error? The squared error has many nice mathematical properties. Echoing the other answerers here, I would say that many of them are merely "convenient"--we might choose to use the absolute error instead if it didn't pose technical issues when solving problems. For instance: If $X$ is a random variable, then the estimator of $X$ that minimizes the squared error is the mean, $E(X)$ . On the other hand, the estimator that minimizes the absolute error is the median, $m(X)$ . The mean has much nicer properties than the median; for instance, $E(X + Y) = E(X) + E(Y)$ , but there is no general expression for $m(X + Y)$ . If you have a vector $\vec X = (X_1, X_2)$ estimated by $\vec x = x_1, x_2$ , then for the squared error it doesn't matter whether you consider the components separately or together: $||\vec X - \vec x||^2 = (X_1 - x_1)^2 + (X_2 - x_2)^2$ , so the squared error of the components just adds. You can't do that with absolute error. This means that the squared error is independent of re-parameterizations : for instance, if you define $\vec Y_1 = (X_1 + X_2, X_1 - X_2)$ , then the minimum-squared-deviance estimators for $Y$ and $X$ are the same, but the minimum-absolute-deviance estimators are not. For independent random variables, variances (expected squared errors) add: $\Var(X + Y) = \Var(X) + \Var(Y)$ . The same is not true for expected absolute error. For a sample from a multivariate Gaussian distribution (where probability density is exponential in the squared distance from the mean), all of its coordinates are Gaussian, no matter what coordinate system you use. For a multivariate Laplace distribution (like a Gaussian but with absolute, not squared, distance), this isn't true. The squared error of a probabilistic classifier is a proper scoring rule . If you had an oracle telling you the actual probability of each class for each item, and you were being scored based on your Brier score, your best bet would be to predict what the oracle told you for each class. This is not true for absolute error. (For instance, if the oracle tells you that $P(Y=1) = 0.9$ , then predicting that $P(Y=1) = 0.9$ yields an expected score of $0.9\cdot 0.1 + 0.1 \cdot 0.9 = 0.18$ ; you should instead predict that $P(Y=1) = 1$ , for an expected score of $0.9\cdot 0 + 0.1 \cdot 1 = 0.1$ .) Some mathematical coincidences or conveniences involving the squared error are more important, though. 
They don't pose technical problem-solving issues; rather, they give us intrinsic reasons why minimizing the square error might be a good idea: When fitting a Gaussian distribution to a set of data, the maximum-likelihood fit is that which minimizes the squared error, not the absolute error. When doing dimensionality reduction, finding the basis that minimizes the squared reconstruction error yields principal component analysis , which is nice to compute, coordinate-independent, and has a natural interpretation for multivariate Gaussian distributions (finding the axes of the ellipse that the distribution makes). There's a variant called "robust PCA" that is sometimes applied to minimizing absolute reconstruction error, but it seems to be less well-studied and harder to understand and compute. Looking deeper One might well ask whether there is some deep mathematical truth underlying the many different conveniences of the squared error. As far as I know, there are a few (which are related in some sense, but not, I would say, the same): Differentiability The squared error is everywhere differentiable , while the absolute error is not (its derivative is undefined at 0). This makes the squared error more amenable to the techniques of mathematical optimization . To optimize the squared error, you can just set its derivative equal to 0 and solve; to optimize the absolute error often requires more complex techniques. Inner products The squared error is induced by an inner product on the underlying space. An inner product is basically a way of "projecting vector $x$ along vector $y$ ," or figuring out "how much does $x$ point in the same direction as $y$ ." In finite dimensions this is the standard (Euclidean) inner product $\langle a, b\rangle = \sum_i a_ib_i$ . Inner products are what allow us to think geometrically about a space, because they give a notion of: a right angle ( $x$ and $y$ are right angles if $\langle x, y\rangle = 0$ ); and a length (the length of $x$ is $||x|| = \sqrt{\langle x, x\rangle}$ ). By "the squared error is induced by the Euclidean inner product" I mean that the squared error between $x$ and $y$ is $||x-y||$ , the Euclidean distance between them. In fact the Euclidean inner product is in some sense the "only possible" axis-independent inner product in a finite-dimensional vector space, which means that the squared error has uniquely nice geometric properties. For random variables, in fact, you can define is a similar inner product: $\langle X, Y\rangle = E(XY)$ . This means that we can think of a "geometry" of random variables, in which two variables make a "right angle" if $E(XY) = 0$ . Not coincidentally, the "length" of $X$ is $E(X^2)$ , which is related to its variance. In fact, in this framework, "independent variances add" is just a consequence of the Pythagorean Theorem: \begin{align}
\Var(X + Y) &= ||(X - \mu_X)\, + (Y - \mu_Y)||^2 \\
&= ||X - \mu_X||^2 + ||Y - \mu_Y||^2 \\
&= \Var(X)\quad\ \ \, + \Var(Y).
\end{align} Beyond squared error Given these nice mathematical properties, would we ever not want to use squared error? Well, as I mentioned at the very beginning, sometimes absolute error is closer to what we "care about" in practice. For instance, if your data has tails that are fatter than Gaussian, then minimizing the squared error can place too much weight on outlying points. The absolute error is less sensitive to such outliers. (For instance, if you observe an outlier in your sample, it changes the squared-error-minimizing mean proportionally to the magnitude of the outlier, but hardly changes the absolute-error-minimizing median at all!) And although the absolute error doesn't enjoy the same nice mathematical properties as the squared error, that just means absolute-error problems are harder to solve , not that they're objectively worse in some sense. The upshot is that as computational methods have advanced, we've become able to solve absolute-error problems numerically, leading to the rise of the subfield of robust statistical methods . In fact, there's a fairly nice correspondence between some squared-error and absolute-error methods: Squared error | Absolute error
========================|============================
Mean | Median
Variance | Expected absolute deviation
Gaussian distribution | Laplace distribution
Linear regression | Quantile regression
PCA | Robust PCA
Ridge regression | LASSO As we get better at modern numerical methods, no doubt we'll find other useful absolute-error-based techniques, and the gap between squared-error and absolute-error methods will narrow. But because of the connection between the squared error and the Gaussian distribution, I don't think it will ever go away entirely. | {
"source": [
"https://stats.stackexchange.com/questions/127598",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/62559/"
]
} |
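The first bullet (the mean minimizes squared error, the median minimizes absolute error) is easy to verify numerically in R:
set.seed(14)
x <- rexp(501)                                   # a skewed sample, so the two minimizers differ
sse <- function(m) sum((x - m)^2)
sae <- function(m) sum(abs(x - m))
c(optimize(sse, range(x))$minimum, mean(x))      # squared error is minimized at the mean
c(optimize(sae, range(x))$minimum, median(x))    # absolute error is minimized at the median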
127,605 | Just need some hints on finding the distribution of $Z =\frac{min(X,Y)}{max(X,Y)}$ Where X and Y are iid ~ Unif(0,1). $P(Z \gt z) = P(\frac{min(X,Y)}{max(X,Y)} \gt z) = P(min(X,Y) \gt z*max(X,Y))$ $= P(X \gt Y, min(X,Y) \gt z*max(X,Y)) + P(Y \gt X, min(X,Y) \gt z*max(X,Y))$ $= P(X \gt Y \gt z*X) + P(Y \gt X \gt z*Y)$ $= \int^x_{zx} 1 dy + \int^y_{zy} 1 dx$ $= x - zx + y - zy = x(1-z) + y(1-z) = (1-z)(x+y)$ Not sure if this is correct, especially the integral part. If anyone could help me out it would be appreciated. Thanks | {
"source": [
"https://stats.stackexchange.com/questions/127605",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/62280/"
]
} |
128,616 | I kind of understand what "overfitting" means, but I need help as to how to come up with a real-world example that applies to overfitting. | Here's a nice example of presidential election time series models from xkcd: There have only been 56 presidential elections and 43 presidents. That is not a lot of data to learn from. When the predictor space expands to include things like having false teeth and the Scrabble point value of names, it's pretty easy for the model to go from fitting the generalizable features of the data (the signal) to matching the noise. When this happens, the fit on the historical data may improve, but the model will fail miserably when used to make inferences about future presidential elections. | {
"source": [
"https://stats.stackexchange.com/questions/128616",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/63568/"
]
} |
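A more hands-on version of the same idea, with simulated data: a very flexible model wins on the training points and typically loses badly on fresh data from the same process.
set.seed(16)
x  <- runif(15);  y  <- sin(2 * pi * x)  + rnorm(15,  sd = 0.2)   # 15 training points
xn <- runif(200); yn <- sin(2 * pi * xn) + rnorm(200, sd = 0.2)   # fresh data from the same process
rmse <- function(fit, x, y) sqrt(mean((predict(fit, data.frame(x = x)) - y)^2))
f3  <- lm(y ~ poly(x, 3))       # a modest model
f12 <- lm(y ~ poly(x, 12))      # 13 parameters for 15 points
c(train = rmse(f3,  x, y), test = rmse(f3,  xn, yn))
c(train = rmse(f12, x, y), test = rmse(f12, xn, yn))   # near-zero training error, typically much larger test error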
128,839 | I just wanted to ask which are in your opinion the best available books on bootstrap out there. By this I don't necessarily only mean the one written by its developers. Could you please indicate which textbook is according to you the best for bootstrap that covers the following criteria? The philosophical/epistemological basis for the technique that
lists domain of applicability, strengths and weaknesses, importance
for model-selection? A good set of simple examples that show implementation,
philosophical underpinnings, preferably with Matlab? | There are two "classic" ones: Efron, B. & Tibshirani, R. J. (1993). An introduction to the bootstrap . London: Chapman & Hall/CRC. Davison, A. C. & Hinkley, D. V. (2009). Bootstrap methods and their application . New York, NY: Cambridge University Press. The first one is very readable and gives you a good idea of what the bootstrap is and what the general reasoning behind the method is. It also provides many examples and practical hints about using the bootstrap in real life. The second is a really extensive review of different usages of the bootstrap, with lots of examples and also examples of code written in R. I would say that those two alone give you a pretty complete overview of the method and could lead you from the basics up to pretty advanced topics. If you don't know much about the bootstrap yet, I'd suggest starting with Efron & Tibshirani since it is written in much simpler language and walks you through the topic step by step from the basics. Davison & Hinkley is a little bit tougher to read but provides you with much practical information and detail. | {
"source": [
"https://stats.stackexchange.com/questions/128839",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/55158/"
]
} |
128,926 | Call:
glm(formula = darters ~ river + pH + temp, family = poisson, data = darterData)
Deviance Residuals:
Min 1Q Median 3Q Max
-3.7422 -1.0257 0.0027 0.7169 3.5347
Coefficients:
Estimate Std.Error z value Pr(>|z|)
(Intercept) 3.144257 0.218646 14.381 < 2e-16 ***
riverWatauga -0.049016 0.051548 -0.951 0.34166
pH 0.086460 0.029821 2.899 0.00374 **
temp -0.059667 0.009149 -6.522 6.95e-11 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 233.68 on 99 degrees of freedom
Residual deviance: 187.74 on 96 degrees of freedom
AIC: 648.21 I want to know how to interpret each parameter estimate in the table above. | I don't think the title of your question accurately captures what you're asking for. The question of how to interpret the parameters in a GLM is very broad because the GLM is a very broad class of models. Recall that a GLM models a response variable $y$ that is assumed to follow a known distribution from the exponential family, and that we have chosen an invertible function $g$ such that $$
\mathrm{E}\left[y\,|\,x\right] = g^{-1}{\left(x_0 + x_1\beta_1 + \dots + x_J\beta_J\right)}
$$ for $J$ predictor variables $x$ . In this model, the interpretation of any particular parameter $\beta_j$ is the rate of change of $g(y)$ with respect to $x_j$ . Define $\mu \equiv \mathrm{E}{\left[y\,|\,x\right]} = g^{-1}{\left(x\right)}$ and $\eta \equiv x \cdot \beta$ to keep the notation clean. Then, for any $j \in \{1,\dots,J\}$ , $$
\beta_j = \frac{\partial\,\eta}{\partial\,x_j} = \frac{\partial\,g(\mu)}{\partial\,x_j} \text{.}
$$ Now define $\mathfrak{e}_j$ to be a vector of $J-1$ zeroes and a single $1$ in the $j$ th position, so that for example if $J=5$ then $\mathfrak{e}_3 = \left(0,0,1,0,0\right)$ . Then $$
\beta_j = g{\left(\mathrm{E}{\left[y\,|\,x + \mathfrak{e}_j \right]}\right)} - g{\left(\mathrm{E}{\left[y\,|\,x\right]}\right)}
$$ Which just means that $\beta_j$ is the effect on $\eta$ of a unit increase in $x_j$ . You can also state the relationship in this way: $$
\frac{\operatorname{\partial}\mathrm{E}{\left[y\,|\,x\right]}}{\operatorname{\partial}x_j} = \frac{\operatorname{\partial}\mu}{\operatorname{\partial}x_j} = \frac{\operatorname{d}\mu}{\operatorname{d}\eta}\frac{\operatorname{\partial}\eta}{\operatorname{\partial}x_j} = \frac{\operatorname{\partial}\mu}{\operatorname{\partial}\eta} \beta_j = \frac{\operatorname{d}g^{-1}}{\operatorname{d}\eta} \beta_j
$$ and $$
\mathrm{E}{\left[y\,|\,x + \mathfrak{e}_j \right]} - \mathrm{E}{\left[y\,|\,x\right]} \equiv \operatorname{\Delta_j} \hat y = g^{-1}{\left( \left(x + \mathfrak{e}_j\right)\beta \right)} - g^{-1}{\left( x\,\beta \right)}
$$ Without knowing anything about $g$ , that's as far as we can get. $\beta_j$ is the effect on $\eta$ , on the transformed conditional mean of $y$ , of a unit increase in $x_j$ , and the effect on the conditional mean of $y$ of a unit increase in $x_j$ is $g^{-1}{\left(\beta\right)}$ . But you seem to be asking specifically about Poisson regression using R's default link function, which in this case is the natural logarithm. If that's the case, you're asking about a specific kind of GLM in which $y \sim \mathrm{Poisson}{\left(\lambda\right)}$ and $g = \ln$ . Then we can get some traction with regard to a specific interpretation. From what I said above, we know that $\frac{\operatorname{\partial}\mu}{\operatorname{\partial}x_j} = \frac{\operatorname{d}g^{-1}}{\operatorname{d}\eta} \beta_j$ . And since we know $g(\mu) = \ln(\mu)$ , we also know that $g^{-1}(\eta) = e^\eta$ . We also happen to know that $\frac{\operatorname{d}e^\eta}{\operatorname{d}\eta} = e^\eta$ , so we can say that $$
\frac{\operatorname{\partial}\mu}{\operatorname{\partial}x_j} = \frac{\operatorname{\partial}\mathrm{E}{\left[y\,|\,x\right]}}{\operatorname{\partial}x_j} = e^{x_0 + x_1\beta_1 + \dots + x_J\beta_J}\beta_j
$$ which finally means something tangible: Given a very small change in $x_j$ , the fitted $\hat y$ changes by $\hat y\,\beta_j$ . Note: this approximation can actually work for changes as large as 0.2, depending on how much precision you need. And using the more familiar unit change interpretation, we have: \begin{align}
\operatorname{\Delta_j} \hat y &= e^{ x_0 + x_1\beta_1 + \dots + \left(x_j + 1\right)\,\beta_j + \dots + x_J\beta_J } - e^{x_0 + x_1\beta_1 + \dots + x_J\beta_J} \\
&= e^{ x_0 + x_1\beta_1 + \dots + x_J\beta_J + \beta_j} - e^{x_0 + x_1\beta_1 + \dots + x_J\beta_J} \\
&= e^{ x_0 + x_1\beta_1 + \dots + x_J\beta_J}e^{\beta_j} - e^{x_0 + x_1\beta_1 + \dots + x_J\beta_J} \\
&= e^{ x_0 + x_1\beta_1 + \dots + x_J\beta_J} \left( e^{\beta_j} - 1 \right)
\end{align} which means Given a unit change in $x_j$ , the fitted $\hat y$ changes by $\hat y \left( e^{\beta_j} - 1 \right)$ . There are three important pieces to note here: The effect of a change in the predictors depends on the level of the response. An additive change in the predictors has a multiplicative effect on the response. You can't interpret the coefficients just by reading them (unless you can compute arbitrary exponentials in your head). So in your example, the effect of increasing pH by 1 is to increase $\hat y$ by $\hat y \left( e^{0.09} - 1 \right)$ ; that is, to multiply $\hat y$ by $e^{0.09} \approx 1.09$ . It looks like your outcome is the number of darters you observe in some fixed unit of time (say, a week). So if you're observing 100 darters a week at a pH of 6.7, raising the pH of the river to 7.7 means you can now expect to see 109 darters a week. | {
"source": [
"https://stats.stackexchange.com/questions/128926",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/63699/"
]
} |
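The multiplicative reading can be confirmed with predict() on a fitted model; since the darter data are not shown, the sketch below simulates stand-in data with coefficients similar to the output above.
set.seed(18)
n <- 100
pH <- runif(n, 6, 8); temp <- runif(n, 10, 25)
darters <- rpois(n, exp(3.1 + 0.086 * pH - 0.06 * temp))
fit <- glm(darters ~ pH + temp, family = poisson)
nd  <- data.frame(pH = c(6.7, 7.7), temp = 15)          # a one-unit increase in pH, temp held fixed
pred <- predict(fit, newdata = nd, type = "response")
unname(pred[2] / pred[1])                               # ratio of fitted counts ...
exp(coef(fit)["pH"])                                    # ... equals exp(beta_pH), close to 1.09 when the fit recovers the true coefficient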
129,017 | Can I call a model wherein Bayes' Theorem is used a "Bayesian model"? I am afraid such a definition might be too broad. So what exactly is a Bayesian model? | In essence, one where inference is based on using Bayes theorem to obtain a posterior distribution for a quantity or quantities of interest from some model (such as parameter values) based on some prior distribution for the relevant unknown parameters and the likelihood from the model. i.e. from a distributional model of some form, $f(X_i|\mathbf{\theta})$, and a prior $p(\mathbf{\theta})$, someone might seek to obtain the posterior $p(\mathbf{\theta}|\mathbf{X})$. A simple example of a Bayesian model is discussed in this question , and in the comments of this one - Bayesian linear regression, discussed in more detail in Wikipedia here . Searches turn up discussions of a number of Bayesian models here. But there are other things one might try to do with a Bayesian analysis besides merely fit a model - see, for example, Bayesian decision theory. | {
"source": [
"https://stats.stackexchange.com/questions/129017",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/26881/"
]
} |
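Probably the smallest complete example of such a model is the Beta-Binomial coin: a Beta prior on the probability of heads, a Binomial likelihood, and Bayes' theorem giving a Beta posterior.
a0 <- 2; b0 <- 2                      # prior: Beta(2, 2) on the probability of heads
heads <- 7; n <- 10                   # data: 7 heads in 10 tosses
a1 <- a0 + heads; b1 <- b0 + n - heads              # conjugate update
theta <- seq(0, 1, by = 0.001)
plot(theta, dbeta(theta, a1, b1), type = "l", ylab = "density")   # posterior
lines(theta, dbeta(theta, a0, b0), lty = 2)                       # prior
qbeta(c(0.025, 0.975), a1, b1)        # a 95% credible interval for the heads probability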
129,023 | What are the best practices for fitting a binomial classification model when the classes are very imbalanced? For example, 99.9% 1's and 0.1% 0's. | {
"source": [
"https://stats.stackexchange.com/questions/129023",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/16596/"
]
} |
129,031 | Suppose we distributed $100$ coins to $10$ persons and the $i$-th person got ${x}_{i}$ coins, how to judge the distribution $X=\{{x}_{1}, {x}_{2}, ..., {x}_{n}\}$ (e.g., $X=\{5, 20, 15, 5, 10, 10, 10, 15, 5, 5\}$) is (almost) balanced or not? Is there a mathematical definition or empirical criterion of the unbalancedness? | {
"source": [
"https://stats.stackexchange.com/questions/129031",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/63783/"
]
} |
129,200 | I am using quantile regression to find predictors of 90th percentile of my data. I am doing this in R using the quantreg package. How can I determine $r^2$ for quantile regression which will indicate how much of variability is being explained by predictor variables? What I really want to know: "Any method I can use to find how much of variability is being explained?". Significance levels by P values is available in output of command: summary(rq(formula,tau,data)) . How can I get goodness of fit? | Koenker and Machado$^{[1]}$ describe $R^1$, a local measure of goodness of fit at the particular ($\tau$) quantile. Let $V(\tau) = \min_{b}\sum \rho_\tau(y_i-x_i'b)$ Let $\hat{\beta}(\tau)$ and $\tilde{\beta}(\tau)$ be the coefficient estimates for the full model, and a restricted model, and let $\hat{V}$ and $\tilde{V}$ be the corresponding $V$ terms. They define the goodness of fit criterion $R^1(\tau) = 1-\frac{\hat{V}}{\tilde{V} }$. Koenker gives code for $V$ here , rho <- function(u,tau=.5)u*(tau - (u < 0))
V <- sum(rho(f$resid, f$tau)) So if we compute $V$ for an intercept-only model ($\tilde{V}$ - or V0 in the code snippet below) and then for an unrestricted model ($\hat{V}$), we can calculate an R1 <- 1-Vhat/V0 that's - at least notionally - somewhat like the usual $R^2$. Edit: In your case, of course, the second argument, which would be put in where f$tau is in the call in the second line of code, will be whichever value of tau you used. The value in the first line merely sets the default. 'Explaining variance about the mean' is really not what you're doing with quantile regression, so you shouldn't expect to have a really equivalent measure. I don't think the concept of $R^2$ translates well to quantile regression. You can define various more-or-less analogous quantities, as here, but no matter what you choose, you won't have most of the properties real $R^2$ has in OLS regression. You need to be clear about what properties you need and what you don't -- in some cases it may be possible to have a measure that does what you want. -- $[1]$ Koenker, R. and Machado, J. (1999), Goodness of Fit and Related Inference Processes for Quantile Regression, Journal of the American Statistical Association, 94(448), 1296-1310
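Putting the pieces together, here is a sketch of the whole calculation using the quantreg package (the simulated data and tau = 0.9 are illustrative assumptions, not part of the original answer):
library(quantreg)
set.seed(1)
d <- data.frame(x = rnorm(100))
d$y <- 1 + 2 * d$x + rt(100, df = 3)
tau <- 0.9
rho <- function(u, tau = 0.5) u * (tau - (u < 0))
f0 <- rq(y ~ 1, tau = tau, data = d)   # restricted (intercept-only) fit
f1 <- rq(y ~ x, tau = tau, data = d)   # full fit
V0   <- sum(rho(resid(f0), tau))
Vhat <- sum(rho(resid(f1), tau))
R1 <- 1 - Vhat / V0
R1
Again, this is a notional analogue of $R^2$ at the chosen quantile, not an equivalent of it. | {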
"source": [
"https://stats.stackexchange.com/questions/129200",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/56211/"
]
} |
129,991 | When there is measurement error in the independent variable I have understood that the results will be biased against 0. When the dependent variable is measured with error they say it just affects the standard errors but this doesn't make much sense to me because we are estimating the effect of $X$ not on the original variable $Y$ but on some other $Y$ plus an error. So how does this not affect the estimates? In this case can I also use instrumental variables to remove this problem? | When you want to estimate a simple model like
$$Y_i = \alpha + \beta X_i + \epsilon_i$$
and instead of the true $Y_i$ you only observe it with some error $\widetilde{Y}_i = Y_i + \nu_i$ which is such that it is uncorrelated with $X$ and $\epsilon$, if you regress
$$\widetilde{Y}_i = \alpha + \beta X_i + \epsilon_i$$
your estimated $\beta$ is
$$
\begin{align}
\widehat{\beta} &= \frac{Cov(\widetilde{Y}_i,X_i)}{Var(X_i)} \newline
&= \frac{Cov(Y_i + \nu_i,X_i)}{Var(X_i)} \newline
&= \frac{Cov(\alpha + \beta X_i + \epsilon_i + \nu_i,X_i)}{Var(X_i)} \newline
&= \frac{Cov(\alpha ,X_i)}{Var(X_i)} + \beta\frac{Cov(X_i,X_i)}{Var(X_i)} + \frac{Cov(\epsilon_i,X_i)}{Var(X_i)} + \frac{Cov(\nu_i,X_i)}{Var(X_i)} \newline
&= \beta \frac{Var(X_i)}{Var(X_i)} \newline
&= \beta
\end{align}
$$
because the covariance between a random variable and a constant ($\alpha$) is zero, and the covariances between $X_i$ and $\epsilon_i$, $\nu_i$ are zero by assumption. So you see that your coefficient is consistently estimated. The only worry is that $\widetilde{Y}_i = Y_i + \nu_i = \alpha + \beta X_i + \epsilon_i + \nu_i$ gives you an additional term in the error, which reduces the power of your statistical tests. In very bad cases of such measurement error in the dependent variable you may not find a significant effect even though it might be there in reality. Generally, instrumental variables will not help you in this case because they tend to be even more imprecise than OLS and they can only help with measurement error in the explanatory variable.
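A quick simulation sketch in R (the particular numbers are invented for illustration) shows both points: noise in the dependent variable leaves the slope estimate unbiased but inflates its standard error.
set.seed(1)
n <- 1000
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n)            # true model
y_obs <- y + rnorm(n, sd = 3)        # dependent variable measured with error
coef(summary(lm(y ~ x)))["x", ]      # slope near 2, small standard error
coef(summary(lm(y_obs ~ x)))["x", ]  # slope still near 2, larger standard error
Averaged over many replications the two slope estimates agree; only the precision differs. | {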
"source": [
"https://stats.stackexchange.com/questions/129991",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/64279/"
]
} |
130,017 | I have several questions regarding the ridge penalty in the least squares context: $$\beta_{ridge} = (\lambda I_D + X'X)^{-1}X'y$$ 1) The expression suggests that the covariance matrix of X is shrunk towards a diagonal matrix, meaning that (assuming that variables are standardized before the procedure) correlation among input variables will be lowered. Is this interpretation correct? 2) If it is a shrinkage application why is it not formulated in the lines of $(\lambda I_D + (1-\lambda)X'X)$, assuming that we can somehow restrict lambda to [0,1] range with a normalization. 3) What can be a normalization for $\lambda$ so that it can be restricted to a standard range like [0,1]. 4) Adding a constant to the diagonal will affect all eigenvalues. Would it be better to attack only the singular or near singular values? Is this equivalent to applying PCA to X and retaining top-N principal components before regression or does it have a different name (since it doesn't modify the cross covariance calculation)? 5) Can we regularize the cross covariance, or does it have any use, meaning $$\beta_{ridge} = (\lambda I_D + X'X)^{-1}(\gamma X'y)$$ where a small $\gamma$ will lower the cross covariance. Obviously this lowers all $\beta$s equally, but perhaps there is a smarter way like hard/soft thresholding depending on covariance value. | Good questions! Yes, this is exactly correct. You can see ridge penalty as one possible way to deal with multicollinearity problem that arises when many predictors are highly correlated. Introducing ridge penalty effectively lowers these correlations. I think this is partly tradition, partly the fact that ridge regression formula as stated in your first equation follows from the following cost function: $$L=\| \mathbf y - \mathbf X \beta \|^2 + \lambda \|\beta\|^2.$$ If $\lambda=0$, the second term can be dropped, and minimizing the first term ("reconstruction error") leads to the standard OLS formula for $\beta$. Keeping the second term leads to the formula for $\beta_\mathrm{ridge}$. This cost function is mathematically very convenient to deal with, and this might be one of the reasons for preferring "non-normalized" lambda. One possible way to normalize $\lambda$ is to scale it by the total variance $\mathrm{tr}(\mathbf X^\top \mathbf X)$, i.e. to use $\lambda \mathrm{tr}(\mathbf X^\top \mathbf X)$ instead of $\lambda$. This would not necessarily confine $\lambda$ to $[0,1]$, but would make it "dimensionless" and would probably result in optimal $\lambda$ being less then $1$ in all practical cases (NB: this is just a guess!). "Attacking only small eigenvalues" does have a separate name and is called principal components regression. The connection between PCR and ridge regression is that in PCR you effectively have a "step penalty" cutting off all the eigenvalues after a certain number, whereas ridge regression applies a "soft penalty", penalizing all eigenvalues, with smaller ones getting penalized more. This is nicely explained in The Elements of Statistical Learning by Hastie et al. (freely available online), section 3.4.1. See also my answer in Relationship between ridge regression and PCA regression . I have never seen this done, but note that you could consider a cost function in the form $$L=\| \mathbf y - \mathbf X \beta \|^2 + \lambda \|\beta-\beta_0\|^2.$$ This shrinks your $\beta$ not to zero, but to some other pre-defined value $\beta_0$. 
If one works out the math, you arrive at the optimal $\beta$ given by $$\beta = (\mathbf X^\top \mathbf X + \lambda \mathbf I)^{-1} (\mathbf X^\top \mathbf y + \lambda \beta_0),$$ which perhaps can be seen as "regularizing the cross-covariance"?
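A short R sketch of this closed-form estimator (all numbers are made up for illustration); with beta0 = 0 it reduces to ordinary ridge regression:
set.seed(1)
n <- 100; p <- 5
X <- matrix(rnorm(n * p), n, p)
y <- X %*% c(2, -1, 0.5, 0, 1) + rnorm(n)
lambda <- 10
ridge_to <- function(lambda, beta0)
  solve(crossprod(X) + lambda * diag(p), crossprod(X, y) + lambda * beta0)
cbind(ols        = solve(crossprod(X), crossprod(X, y)),
      ridge      = ridge_to(lambda, rep(0, p)),     # shrunk towards 0
      ridge_to_1 = ridge_to(lambda, rep(1, p)))     # shrunk towards 1 instead
Comparing the columns shows each coefficient being pulled from its OLS value towards the chosen $\beta_0$. | {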
"source": [
"https://stats.stackexchange.com/questions/130017",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/20980/"
]
} |
130,067 | I know that the mean of the sum of independent variables is the sum of the means of each independent variable. Does this apply to dependent variables as well? | Expectation (taking the mean) is a linear operator. This means that, amongst other things, $\mathbb{E}(X + Y) = \mathbb{E}(X) + \mathbb{E}(Y)$ for any two random variables $X$ and $Y$ (for which the expectations exist), regardless of whether they are independent or not. We can generalise (e.g. by induction) so that $\mathbb{E}\left(\sum_{i=1}^n X_i\right) = \sum_{i=1}^n \mathbb{E}(X_i)$ so long as each expectation $\mathbb{E}(X_i)$ exists. So yes, the mean of the sum is the same as the sum of the means even if the variables are dependent. But note that this does not apply for the variance! So while $\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y)$ for independent variables, or even variables which are dependent but uncorrelated, the general formula is $\mathrm{Var}(X + Y) = \mathrm{Var}(X) + \mathrm{Var}(Y) + 2\mathrm{Cov}(X, Y)$ where $\mathrm{Cov}$ is the covariance of the variables.
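A quick simulated check in R (the correlation of 0.8 is an arbitrary illustration): means add under dependence, variances do not until the covariance term is included.
set.seed(1)
x <- rnorm(1e5)
y <- 0.8 * x + sqrt(1 - 0.8^2) * rnorm(1e5)    # y is correlated with x
mean(x + y); mean(x) + mean(y)                 # essentially equal
var(x + y); var(x) + var(y)                    # differ...
var(x) + var(y) + 2 * cov(x, y)                # ...until the covariance is added
The small discrepancies are just Monte Carlo error. | {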
"source": [
"https://stats.stackexchange.com/questions/130067",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/64322/"
]
} |
130,069 | What is the distribution of the coefficient of determination, or R squared, $R^2$, in linear univariate multiple regression under the null hypothesis $H_0:\beta=0$? How does it depend on the number of predictors $k$ and number of samples $n>k$? Is there a closed-form expression for the mode of this distribution? In particular, I have a feeling that for simple regression (with one predictor $x$) this distribution has mode at zero, but for multiple regression the mode is at a non-zero positive value. If this is indeed true, is there an intuitive explanation of this "phase transition"? Update As @Alecos showed below, the distribution indeed peaks at zero when $k=2$ and $k=3$ and not at zero when $k>3$. I feel that there should be a geometrical view on this phase transition. Consider geometrical view of OLS: $\mathbf y$ is a vector in $\mathbb R^n$, $\mathbf X$ defines a $k$-dimensional subspace there. OLS amounts to projecting $\mathbf y$ onto this subspace, and $R^2$ is squared cosine of the angle between $\mathbf y$ and its projection $\hat{\mathbf y}$. Now, from @Alecos's answer it follows that if all vectors are random, then the probability distribution of this angle will peak at $90^\circ$ for $k=2$ and $k=3$, but will have a mode at some other value $<90^\circ$ for $k>3$. Why?! Update 2: I am accepting @Alecos'es answer, but still have a feeling that I am missing some important insight here. If anybody ever suggests any other (geometrical or not) view on this phenomenon that would make it "obvious", I will be happy to offer a bounty. | For the specific hypothesis (that all regressor coefficients are zero, not including the constant term, which is not examined in this test) and under normality, we know (see eg Maddala 2001, p. 155, but note that there, $k$ counts the regressors without the constant term, so the expression looks a bit different) that the statistic $$F = \frac {n-k}{k-1}\frac {R^2}{1-R^2}$$ is distributed as a central $F(k-1, n-k)$ random variable. Note that although we do not test the constant term, $k$ counts it also. Moving things around, $$(k-1)F - (k-1)FR^2 = (n-k)R^2 \Rightarrow (k-1)F = R^2\big[(n-k) + (k-1)F\big]$$ $$\Rightarrow R^2 = \frac {(k-1)F}{(n-k) + (k-1)F}$$ But the right hand side is distributed as a Beta distribution , specifically $$R^2 \sim Beta\left (\frac {k-1}{2}, \frac {n-k}{2}\right)$$ The mode of this distribution is $$\text{mode}R^2 = \frac {\frac {k-1}{2}-1}{\frac {k-1}{2}+ \frac {n-k}{2}-2} =\frac {k-3}{n-5} $$ FINITE & UNIQUE MODE From the above relation we can infer that for the distribution to have a unique and finite mode we must have $$k\geq 3, n >5 $$ This is consistent with the general requirement for a Beta distribution, which is $$\{\alpha >1 , \beta \geq 1\},\;\; \text {OR}\;\; \{\alpha \geq1 , \beta > 1\}$$ as one can infer from this CV thread or read here . Note that if $\{\alpha =1 , \beta = 1\}$, we obtain the Uniform distribution, so all the density points are modes (finite but not unique). Which creates the question: Why, if $k=3, n=5$, $R^2$ is distributed as a $U(0,1)$? IMPLICATIONS Assume that you have $k=5$ regressors (including the constant), and $n=99$ observations. Pretty nice regression, no overfitting. Then $$R^2\Big|_{\beta=0} \sim Beta\left (2, 47\right), \text{mode}R^2 = \frac 1{47} \approx 0.021$$ and density plot Intuition please: this is the distribution of $R^2$ under the hypothesis that no regressor actually belongs to the regression. 
So a) the distribution is independent of the regressors, b) as the sample size increases its distribution is concentrated towards zero as the increased information swamps small-sample variability that may produce some "fit" but also c) as the number of irrelevant regressors increases for given sample size, the distribution concentrates towards $1$, and we have the "spurious fit" phenomenon. But also, note how "easy" it is to reject the null hypothesis: in the particular example, for $R^2=0.13$ the cumulative probability has already reached $0.99$, so an obtained $R^2>0.13$ will reject the null of "insignificant regression" at significance level $1$%. ADDENDUM To respond to the new issue regarding the mode of the $R^2$ distribution, I can offer the following line of thought (not geometrical), which links it to the "spurious fit" phenomenon: when we run least-squares on a data set, we essentially solve a system of $n$ linear equations with $k$ unknowns (the only difference from high-school math is that back then we called "known coefficients" what in linear regression we call "variables/regressors", "unknown x" what we now call "unknown coefficients", and "constant terms" what we now call "dependent variable"). As long as $k<n$ the system is over-identified and there is no exact solution, only an approximate one - and the difference emerges as "unexplained variance of the dependent variable", which is captured by $1-R^2$. If $k=n$ the system has one exact solution (assuming linear independence). In between, as we increase $k$, we reduce the "degree of overidentification" of the system and we "move towards" the single exact solution. Under this view, it makes sense why $R^2$ increases spuriously with the addition of irrelevant regressors, and consequently, why its mode moves gradually towards $1$, as $k$ increases for given $n$.
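A short Monte Carlo sketch in R of the claimed null distribution (the choice of n = 99 and k = 5 simply mirrors the example above):
set.seed(1)
n <- 99; k <- 5
r2 <- replicate(5000, {
  y <- rnorm(n)
  X <- matrix(rnorm(n * (k - 1)), n)     # irrelevant regressors
  summary(lm(y ~ X))$r.squared
})
hist(r2, breaks = 50, freq = FALSE)
curve(dbeta(x, (k - 1) / 2, (n - k) / 2), add = TRUE)   # Beta(2, 47) overlay
(k - 3) / (n - 5)                                       # mode, about 0.021
The histogram should sit on top of the Beta density, up to simulation noise. | {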
"source": [
"https://stats.stackexchange.com/questions/130069",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/28666/"
]
} |
130,102 | The book "Introduction to Machine learning" by Ethem Alpaydın states that the VC dimension of an axis-aligned rectangle is 4. But how can a rectangle shatter a set of four collinear points with alternate positive and negative points? Can someone explain and prove the VC dimension of a rectangle? | tl;dr: You've got the definition of VC dimension incorrect. The VC dimension of rectangles is the cardinality of the largest set of points that can be shattered by a rectangle. The VC dimension of rectangles is 4 because there exists a set of 4 points that can be shattered by a rectangle and any set of 5 points cannot be shattered by a rectangle. So, while it's true that a rectangle cannot shatter a set of four collinear points with alternating positive and negative labels, the VC dimension is still 4 because there exists one configuration of 4 points which can be shattered.
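A brute-force sketch in R of the standard witness: four points in a "diamond" configuration (the coordinates are an assumed example) are shattered by axis-aligned rectangles, using the bounding box of the positively labelled points as the rectangle.
pts <- rbind(c(0, 1), c(1, 0), c(2, 1), c(1, 2))
shattered <- all(sapply(0:15, function(code) {
  pos <- as.logical(bitwAnd(code, 2^(0:3)))
  if (!any(pos)) return(TRUE)                # an empty rectangle realises this labelling
  box <- apply(pts[pos, , drop = FALSE], 2, range)
  inside <- pts[, 1] >= box[1, 1] & pts[, 1] <= box[2, 1] &
            pts[, 2] >= box[1, 2] & pts[, 2] <= box[2, 2]
  all(inside == pos)                         # no negative point may fall inside
}))
shattered   # TRUE: all 16 labellings are realised
For any five points, the labelling that marks the four directional extremes positive and the remaining point negative cannot be realised, which is the other half of the argument. | {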
"source": [
"https://stats.stackexchange.com/questions/130102",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/55240/"
]
} |
130,262 | Books and discussions often state that when facing problems (of which there are a few) with a predictor, log-transforming it is a possibility. Now, I understand that this depends on distributions and normality in predictors is not an assumption of regression; but log transforming does make data more uniform, less affected by outliers and so on. I thought about log transforming all my continuous variables which are not of main interest, ie variables I only adjust for. Is that wrong? Good? Useless? | "Now, I understand that this depends on distributions and normality in predictors ... log transforming does make data more uniform" -- As a general claim, this is false --- but even if it were the case, why would uniformity be important? Consider, for example, i) a binary predictor taking only the values 1 and 2. Taking logs would leave it as a binary predictor taking only the values 0 and log 2. It doesn't really affect anything except the intercept and scaling of terms involving this predictor. Even the p-value of the predictor would be unchanged, as would the fitted values. ii) consider a left-skew predictor. Now take logs. It typically becomes more left skew. iii) uniform data becomes left skew (though often not so extreme a change). "less affected by outliers" -- As a general claim, this is false. Consider low outliers in a predictor. "I thought about log transforming all my continuous variables which are not of main interest" -- To what end? If originally the relationships were linear, they would no longer be. And if they were already curved, doing this automatically might make them worse (more curved), not better. -- Taking logs of a predictor (whether of primary interest or not) might sometimes be suitable, but it's not always so.
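A tiny R illustration of points ii) and iii) (simulated data; the particular distributions are arbitrary choices):
skew <- function(z) mean((z - mean(z))^3) / sd(z)^3
set.seed(1)
u  <- runif(1e5, 1, 10)        # roughly uniform predictor
ls <- 10 * rbeta(1e5, 5, 1)    # left-skewed, strictly positive predictor
skew(u); skew(log(u))          # about 0  ->  clearly negative (left skew)
skew(ls); skew(log(ls))        # negative ->  even more negative
So logging does not automatically tame a predictor; it can push it further from symmetry. | {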
"source": [
"https://stats.stackexchange.com/questions/130262",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/35413/"
]
} |
130,389 | I'm not looking for a plug and play method like BEST in R but rather a mathematical explanation of what are some Bayesian methods I can use to test the difference between the mean of two samples. | This is a good question that seems to pop up a lot: link 1, link 2. The paper Bayesian Estimation Supersedes the t-Test that Cam.Davidson.Pilon pointed out is an excellent resource on this subject. It is also very recent, published in 2012, which I think in part is due to the current interest in the area. I will try to summarize a mathematical explanation of a Bayesian alternative to the two-sample t-test. This summary is similar to the BEST paper, which assesses the difference in two samples by comparing the difference in their posterior distributions (explained below in R). set.seed(7)
#create samples
sample.1 <- rnorm(8, 100, 3)
sample.2 <- rnorm(10, 103, 7)
#we need a pooled data set for estimating parameters in the prior.
pooled <- c(sample.1, sample.2)
par(mfrow=c(1, 2))
hist(sample.1)
hist(sample.2) In order to compare the sample means we need to estimate what they are. The Bayesian method to do so uses Bayes' theorem: P(A|B) = P(B|A) * P(A)/P(B) (the notation P(A|B) is read as the probability of A given B). Thanks to modern numerical methods we can ignore the probability of B, P(B), and use the proportional statement: P(A|B) $\propto$ P(B|A)*P(A). In Bayesian vernacular, the posterior is proportional to the likelihood times the prior. Applying Bayes' theorem to our problem, where we want to know the means of the samples given some data, we get $P(mean.1 | sample.1)$ $\propto$ $P(sample.1 | mean.1) * P(mean.1)$. The first term on the right is the likelihood, $P(sample.1 | mean.1)$, which is the probability of observing the sample data given mean.1. The second term is the prior, $P(mean.1)$, which is simply the probability of mean.1. Figuring out appropriate priors is still a bit of an art and is one of the biggest criticisms of Bayesian methods. Let's put it in code. Code makes everything better. likelihood <- function(parameters){
mu1=parameters[1]; sig1=parameters[2]; mu2=parameters[3]; sig2=parameters[4]
prod(dnorm(sample.1, mu1, sig1)) * prod(dnorm(sample.2, mu2, sig2))
}
prior <- function(parameters){
mu1=parameters[1]; sig1=parameters[2]; mu2=parameters[3]; sig2=parameters[4]
dnorm(mu1, mean(pooled), 1000*sd(pooled)) * dnorm(mu2, mean(pooled), 1000*sd(pooled)) * dexp(sig1, rate=0.1) * dexp(sig2, 0.1)
} I made some assumptions in the prior that need to be justified. To keep the priors from prejudicing the estimated mean I wanted to make them broad and uniform-ish over plausible values, with the aim of letting the data produce the features of the posterior. I used the recommended settings from BEST and distributed the mu's normally with mean = mean(pooled) and a broad standard deviation = 1000*sd(pooled). The standard deviations I set to a broad exponential distribution, because I wanted a broad unbounded distribution. Now we can make the posterior: posterior <- function(parameters) {likelihood(parameters) * prior(parameters)} We will sample the posterior distribution using Markov chain Monte Carlo (MCMC) with a Metropolis-Hastings modification. It's easiest to understand with code. #starting values
mu1 = 100; sig1 = 10; mu2 = 100; sig2 = 10
parameters <- c(mu1, sig1, mu2, sig2)
#this is the MCMC /w Metropolis method
n.iter <- 10000
results <- matrix(0, nrow=n.iter, ncol=4)
results[1, ] <- parameters
for (iteration in 2:n.iter){
candidate <- parameters + rnorm(4, sd=0.5)
ratio <- posterior(candidate)/posterior(parameters)
if (runif(1) < ratio) parameters <- candidate #Metropolis modification
results[iteration, ] <- parameters
The results matrix is a list of samples from the posterior distribution for each parameter, which we can use to answer our original question: is sample.1 different from sample.2? But first, to avoid effects of the starting values, we will "burn in" the first 500 values of the chain. #burn-in
results <- results[500:n.iter,] Now, is sample.1 different than sample.2? mu1 <- results[,1]
mu2 <- results[,3]
hist(mu1 - mu2) mean(mu1 - mu2 < 0)
[1] 0.9953689 From this analysis I would conclude there is a 99.5% chance that the mean for sample.1 is less than the mean for sample.2. An advantage of the Bayesian approach, as pointed out in the BEST paper, is that it can quantify support for specific hypotheses, e.g., what is the probability that sample.2 is 5 units bigger than sample.1? mean(mu2 - mu1 > 5)
[1] 0.9321124 We would conclude that there is a 93% chance that the mean of sample.2 is 5 units greater than the mean of sample.1. An observant reader would find that interesting, because we know the true populations have means of 100 and 103 respectively. This is most likely due to the small sample size and the choice of a normal distribution for the likelihood. I will end this answer with a warning: this code is for teaching purposes. For a real analysis use RJAGS and, depending on your sample size, fit a t-distribution for the likelihood. If there is interest I will post a t-test using RJAGS. EDIT:
As requested here is a JAGS model. model.str <- 'model {
for (i in 1:Ntotal) {
y[i] ~ dt(mu[x[i]], tau[x[i]], nu)
}
for (j in 1:2) {
mu[j] ~ dnorm(mu_pooled, tau_pooled)
tau[j] <- 1 / pow(sigma[j], 2)
sigma[j] ~ dunif(sigma_low, sigma_high)
}
nu <- nu_minus_one + 1
nu_minus_one ~ dexp(1 / 29)
}'
# Indicator variable
x <- c(rep(1, length(sample.1)), rep(2, length(sample.2)))
cpd.model <- jags.model(textConnection(model.str),
data=list(y=pooled,
x=x,
mu_pooled=mean(pooled),
tau_pooled=1/(1000 * sd(pooled))^2,
sigma_low=sd(pooled) / 1000,
sigma_high=sd(pooled) * 1000,
Ntotal=length(pooled)))
update(cpd.model, 1000)
chain <- coda.samples(model = cpd.model, n.iter = 100000,
variable.names = c('mu', 'sigma'))
rchain <- as.matrix(chain)
hist(rchain[, 'mu[1]'] - rchain[, 'mu[2]'])
mean(rchain[, 'mu[1]'] - rchain[, 'mu[2]'] < 0)
mean(rchain[, 'mu[2]'] - rchain[, 'mu[1]'] > 5) | {
"source": [
"https://stats.stackexchange.com/questions/130389",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/61501/"
]
} |
130,432 | I'm looking for an intuitive explanation for the following questions: In statistics and information theory, what's the difference between Bhattacharyya distance and KL divergence, as measures of the difference between two discrete probability distributions? Do they have absolutely no relationships and measure the distance between two probability distribution in totally different way? | The Bhattacharyya coefficient is defined as $$D_B(p,q) = \int \sqrt{p(x)q(x)}\,\text{d}x$$ and can be turned into a distance $d_H(p,q)$ as $$d_H(p,q)=\{1-D_B(p,q)\}^{1/2}$$ which is called the Hellinger distance . A connection between this Hellinger distance and the Kullback-Leibler divergence is $$d_{KL}(p\|q) \geq 2 d_H^2(p,q) = 2 \{1-D_B(p,q)\}\,,$$ since \begin{align*}
d_{KL}(p\|q) &= \int \log \frac{p(x)}{q(x)}\,p(x)\text{d}x\\
&= 2\int \log \frac{\sqrt{p(x)}}{\sqrt{q(x)}}\,p(x)\text{d}x\\
&= 2\int -\log \frac{\sqrt{q(x)}}{\sqrt{p(x)}}\,p(x)\text{d}x\\
&\ge 2\int \left\{1-\frac{\sqrt{q(x)}}{\sqrt{p(x)}}\right\}\,p(x)\text{d}x\\
&= \int \left\{1+1-2\sqrt{p(x)}\sqrt{q(x)}\right\}\,\text{d}x\\
&= \int \left\{\sqrt{p(x)}-\sqrt{q(x)}\right\}^2\,\text{d}x\\
&= 2d_H(p,q)^2
\end{align*} However, this is not the question: if the Bhattacharyya distance is defined as $$d_B(p,q)\stackrel{\text{def}}{=}-\log D_B(p,q)\,,$$ then \begin{align*}d_B(p,q)=-\log D_B(p,q)&=-\log \int \sqrt{p(x)q(x)}\,\text{d}x\\
&\stackrel{\text{def}}{=}-\log \int h(x)\,\text{d}x\\
&= -\log \int \frac{h(x)}{p(x)}\,p(x)\,\text{d}x\\
&\le \int -\log \left\{\frac{h(x)}{p(x)}\right\}\,p(x)\,\text{d}x\\
&= \int \frac{-1}{2}\log \left\{\frac{h^2(x)}{p^2(x)}\right\}\,p(x)\,\text{d}x\\
\end{align*} and since $h^2(x)/p^2(x)=q(x)/p(x)$, the last integral equals $\tfrac{1}{2}d_{KL}(p\|q)$. Hence, the inequality between the two distances is $$d_{KL}(p\|q)\ge 2d_B(p,q)\,.$$ One could then wonder whether this inequality follows from the first one. It happens to be the opposite: since $$-\log(x)\ge 1-x\qquad\qquad 0\le x\le 1\,,$$ we have the complete ordering $$d_{KL}(p\|q)\ge 2d_B(p,q)\ge 2d_H(p,q)^2\,.$$
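A quick numerical check in R (two arbitrary normal densities chosen for illustration) of the final ordering:
p <- function(x) dnorm(x, 0, 1)
q <- function(x) dnorm(x, 1, 1.5)
dKL <- integrate(function(x) p(x) * log(p(x) / q(x)), -10, 10)$value
DB  <- integrate(function(x) sqrt(p(x) * q(x)), -10, 10)$value
dB  <- -log(DB)      # Bhattacharyya distance
dH2 <- 1 - DB        # squared Hellinger distance
c(dKL = dKL, two_dB = 2 * dB, two_dH2 = 2 * dH2)   # decreasing, as claimed
The finite integration limits simply avoid numerical 0/0 issues far in the tails. | {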
"source": [
"https://stats.stackexchange.com/questions/130432",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/60194/"
]
} |
130,775 | I suppose I get frustrated every time I hear someone say that non-normality of residuals and /or heteroskedasticity violates OLS assumptions. To estimate parameters in an OLS model neither of these assumptions are necessary by the Gauss-Markov theorem. I see how this matters in Hypothesis Testing for the OLS model, because assuming these things give us neat formulas for t-tests, F-tests, and more general Wald statistics. But it is not too hard to do hypothesis testing without them. If we drop just homoskedasticity we can calculate robust standard errors and clustered standard errors easily. If we drop normality altogether we can use bootstrapping and, given another parametric specification for the error terms, likelihood ratio, and Lagrange multiplier tests. It's just a shame that we teach it this way, because I see a lot of people struggling with assumptions they do not have to meet in the first place. Why is it that we stress these assumptions so heavily when we have the ability to easily apply more robust techniques? Am I missing something important? | In Econometrics, we would say that non-normality violates the conditions of the Classical Normal Linear Regression Model, while heteroskedasticity violates both the assumptions of the CNLR and of the Classical Linear Regression Model. But those that say "...violates OLS" are also justified: the name Ordinary Least-Squares comes from Gauss directly and essentially refers to normal errors. In other words "OLS" is not an acronym for least-squares estimation (which is a much more general principle and approach), but of the CNLR. Ok, this was history, terminology and semantics. I understand the core of the OP's question as follows: "Why should we emphasize the ideal, if we have found solutions for the case when it is not present?" (Because the CNLR assumptions are ideal, in the sense that they provide excellent least-square estimator properties "off-the-shelf", and without the need to resort to asymptotic results. Remember also that OLS is maximum likelihood when the errors are normal). As an ideal, it is a good place to start teaching . This is what we always do in teaching any kind of subject: "simple" situations are "ideal" situations, free of the complexities one will actually encounter in real life and real research, and for which no definite solutions exist . And this is what I find problematic about the OP's post: he writes about robust standard errors and bootstrap as though they are "superior alternatives", or foolproof solutions to the lack of the said assumptions under discussion for which moreover the OP writes "..assumptions that people do not have to meet" Why? Because there are some methods of dealing with the situation, methods that have some validity of course, but they are far from ideal? Bootstrap and heteroskedasticity-robust standard errors are not the solutions -if they indeed were, they would have become the dominant paradigm, sending the CLR and the CNLR to the history books. But they are not. So we start from the set of assumptions that guarantees those estimator properties that we have deemed important (it is another discussion whether the properties designated as desirable are indeed the ones that should be), so that we keep visible that any violation of them, has consequences which cannot be fully offset through the methods we have found in order to deal with the absence of these assumptions. 
It would be really dangerous, scientifically speaking, to convey the feeling that "we can bootstrap our way to the truth of the matter" -because, simply, we cannot. So, they remain imperfect solutions to a problem , not an alternative and/or definitely superior way to do things. Therefore, we have first to teach the problem-free situation, then point to the possible problems, and then discuss possible solutions. Otherwise, we would elevate these solutions to a status they don't really have. | {
"source": [
"https://stats.stackexchange.com/questions/130775",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/61846/"
]
} |
130,805 | It seems that for K-means and other related algorithms, clustering is based off calculating distance between points. Is there one that works without it? | One example of such a method is Finite Mixture Models (e.g. here or here ) used for clustering. In FMM you consider the distribution ($f$) of your variable $X$ as a mixture of $K$ distributions ($f_1,...,f_k$): $$f(x, \vartheta) = \sum^K_{k=1} \pi_k f_k(x, \vartheta_k)$$ where $\vartheta$ is a vector of parameters $\vartheta = (\pi', \vartheta_1', ..., \vartheta_k')'$, $\pi_k$ is the proportion of the $k$'th distribution in the mixture and $\vartheta_k$ is a parameter (or parameters) of the $f_k$ distribution. A specific case for discrete data is Latent Class Analysis (e.g. Vermunt and Magidson, 2003 ) defined as: $$P(x, k) = P(k) P(x|k)$$ where $P(k)$ is the probability of observing latent class $k$ (i.e. $\pi_k$), $P(x)$ is the probability of observing an $x$ value and $P(x|k)$ is the probability of observing $x$ given class $k$. Usually for both FMM and LCA the EM algorithm is used for estimation; a Bayesian approach is also possible, but a little bit more demanding because of problems such as model identification and label switching (e.g. Xi'an's blog ). So there is no distance measure but rather a statistical model defining the structure (distribution) of your data. Because of that, another name for this method is "model-based clustering". Check the two books on FMM: McLachlan, G. & Peel, D. (2000). Finite Mixture Models. John Wiley & Sons. Frühwirth-Schnatter, S. (2006). Finite Mixture and Markov Switching Models. Springer. One of the most popular clustering packages that uses FMM is mclust (check here or here ), which is implemented in R. However, more complicated FMMs are also possible; check for example the flexmix package and its documentation. For LCA there is the R poLCA package.
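A minimal sketch of model-based clustering in R with the mclust package (the use of the iris data here is just an illustration, not part of the original answer):
library(mclust)
fit <- Mclust(iris[, 1:4])            # fits Gaussian mixtures, selects K by BIC
summary(fit)
table(fit$classification, iris$Species)
No distance matrix is ever computed; cluster membership comes from the estimated posterior probabilities of the mixture components. | {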
"source": [
"https://stats.stackexchange.com/questions/130805",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/53475/"
]
} |
130,867 | What are the differences between "inference" and "estimation" under the context of machine learning ? As a newbie, I feel that we infer random variables and estimate the model parameters. Is my this understanding right? If not, what are the differences exactly, and when should I use which? Also, which one is the synonym of "learn"? | Statistical inference is made of the whole collection of conclusions one can draw from a given dataset and an associated hypothetical model, including the fit of the said model. To quote from Wikipedia , Inference is the act or process of deriving logical conclusions from premises known or assumed to be true. and, Statistical inference uses mathematics to draw conclusions in the presence of uncertainty. Estimation is but one aspect of inference where one substitutes unknown parameters (associated with the hypothetical model that generated the data) with optimal solutions based on the data (and possibly prior information about those parameters). It should always be associated with an evaluation of the uncertainty of the reported estimates, evaluation that is an integral part of inference. Maximum likelihood is one instance of estimation, but it does not cover the whole of inference. On the opposite, Bayesian analysis offers a complete inference machine. | {
"source": [
"https://stats.stackexchange.com/questions/130867",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/26881/"
]
} |
131,138 | I was reading about kernel PCA ( 1 , 2 , 3 ) with Gaussian and polynomial kernels. How does the Gaussian kernel separate seemingly any sort of nonlinear data exceptionally well? Please give an intuitive analysis, as well as a mathematically involved one if possible. What is a property of the Gaussian kernel (with ideal $\sigma$) that other kernels don't have? Neural networks, SVMs, and RBF networks come to mind. Why don't we put the norm through, say, a Cauchy PDF and expect the same results? | I think the key to the magic is smoothness. My long answer which follows
is simply to explain about this smoothness. It may or may not be an answer you expect. Short answer: Given a positive definite kernel $k$, there exists its corresponding
space of functions $\mathcal{H}$. Properties of functions are determined
by the kernel. It turns out that if $k$ is a Gaussian kernel, the
functions in $\mathcal{H}$ are very smooth. So, a learned function
(e.g, a regression function, principal components in RKHS as in kernel
PCA) is very smooth. Usually smoothness assumption is sensible for
most datasets we want to tackle. This explains why a Gaussian kernel
is magical. Long answer for why a Gaussian kernel gives smooth functions: A positive definite kernel $k(x,y)$ defines (implicitly) an inner
product $k(x,y)=\left\langle \phi(x),\phi(y)\right\rangle _{\mathcal{H}}$
for feature vector $\phi(x)$ constructed from your input $x$, and
$\mathcal{H}$ is a Hilbert space. The notation $\left\langle \phi(x),\phi(y)\right\rangle $
means an inner product between $\phi(x)$ and $\phi(y)$. For our purpose,
you can imagine $\mathcal{H}$ to be the usual Euclidean space but
possibly with an infinite number of dimensions. Imagine the usual vector
that is infinitely long like $\phi(x)=\left(\phi_{1}(x),\phi_{2}(x),\ldots\right)$.
In kernel methods, $\mathcal{H}$ is a space of functions called reproducing
kernel Hilbert space (RKHS). This space has a special property called
``reproducing property'' which is that $f(x)=\left\langle f,\phi(x)\right\rangle $.
This says that to evaluate $f(x)$, first you construct a feature
vector (infinitely long as mentioned) for $f$. Then you construct
your feature vector for $x$ denoted by $\phi(x)$ (infinitely long).
The evaluation of $f(x)$ is given by taking an inner product of the
two. Obviously, in practice, no one will construct an infinitely long vector. Since we only care about its inner product, we just directly evaluate the kernel $k$. Bypassing the computation of explicit features and directly computing its inner product is known as the "kernel trick". What are the features ? I kept saying features $\phi_{1}(x),\phi_{2}(x),\ldots$ without specifying
what they are. Given a kernel $k$, the features are not unique. But
$\left\langle \phi(x),\phi(y)\right\rangle $ is uniquely determined.
To explain smoothness of the functions, let us consider Fourier features.
Assume a translation invariant kernel $k$, meaning $k(x,y)=k(x-y)$
i.e., the kernel only depends on the difference of the two arguments.
Gaussian kernel has this property. Let $\hat{k}$ denote the Fourier
transform of $k$. In this Fourier viewpoint, the features of $f$
are given by $f:=\left(\cdots,\hat{f}_{l}/\sqrt{\hat{k}_{l}},\cdots\right)$.
This is saying that the feature representation of your function $f$
is given by its Fourier transform divided by the Fourer transform
of the kernel $k$. The feature representation of $x$, which is $\phi(x)$
is $\left(\cdots,\sqrt{\hat{k}_{l}}\exp\left(-ilx\right),\cdots\right)$
where $i=\sqrt{-1}$. One can show that the reproducing property holds
(an exercise to readers). As in any Hilbert space, all elements belonging to the space must
have a finite norm. Let us consider the squared norm of an $f\in\mathcal{H}$: $
\|f\|_{\mathcal{H}}^{2}=\left\langle f,f\right\rangle _{\mathcal{H}}=\sum_{l=-\infty}^{\infty}\frac{\hat{f}_{l}^{2}}{\hat{k}_{l}}.
$ So when is this norm finite i.e., $f$ belongs to the space ? It is
when $\hat{f}_{l}^{2}$ drops faster than $\hat{k}_{l}$ so that the
sum converges. Now, the Fourier transform of a Gaussian kernel $k(x,y)=\exp\left(-\frac{\|x-y\|^{2}}{\sigma^{2}}\right)$ is another Gaussian where $\hat{k}_{l}$ decreases exponentially fast
with $l$. So if $f$ is to be in this space, its Fourier transform
must drop even faster than that of $k$. This means the function will
have effectively only a few low frequency components with high weights.
A signal with only low frequency components does not ``wiggle''
much. This explains why a Gaussian kernel gives you a smooth function. Extra: What about a Laplace kernel ? If you consider a Laplace kernel $k(x,y)=\exp\left(-\frac{\|x-y\|}{\sigma}\right)$, its Fourier transform is a Cauchy distribution which drops much slower than the exponential function in the Fourier
transform of a Gaussian kernel. This means a function $f$ will have
more high-frequency components. As a result, the function given by
a Laplace kernel is ``rougher'' than that given by a Gaussian kernel. What is a property of the Gaussian kernel that other kernels do not have ? Regardless of the Gaussian width, one property is that Gaussian kernel is ``universal''. Intuitively,
this means, given a bounded continuous function $g$ (arbitrary),
there exists a function $f\in\mathcal{H}$ such that $f$ and $g$
are close (in the sense of $\|\cdot\|_{\infty})$ up to arbitrary
precision needed. Basically, this means Gaussian kernel gives functions which can approximate "nice" (bounded, continuous) functions arbitrarily well. Gaussian and Laplace kernels are universal. A polynomial kernel, for
example, is not. Why don't we put the norm through, say, a Cauchy PDF and expect the
same results? In general, you can do anything you like as long as the resulting
$k$ is positive definite. Positive definiteness is defined as $\sum_{i=1}^{N}\sum_{j=1}^{N}k(x_{i},x_{j})\alpha_{i}\alpha_{j}>0$
for all $\alpha_{i}\in\mathbb{R}$ (not all zero), all $\{x_{i}\}_{i=1}^{N}$ and all
$N\in\mathbb{N}$ (set of natural numbers). If $k$ is not positive
definite, then it does not correspond to an inner product space. All
the analysis breaks because you do not even have a space of functions
$\mathcal{H}$ as mentioned. Nonetheless, it may work empirically. For example, the hyperbolic tangent kernel (see number 7 on this page ) $k(x,y) = \tanh(\alpha x^\top y + c)$, which is intended to imitate sigmoid activation units in neural networks, is only positive definite for some settings of $\alpha$ and $c$. Still, it was reported that it works in practice. What about other kinds of features? I said features are not unique. For the Gaussian kernel, another set of features is given by the Mercer expansion. See Section 4.3.1 of the famous Gaussian process book. In this case, the features $\phi(x)$ are Hermite polynomials evaluated at $x$.
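To see the smoothness difference concretely, here is a small R sketch (not from the original answer; the toy data, bandwidths and ridge penalty are arbitrary choices) fitting kernel ridge regression with a Gaussian and with a Laplace kernel:
set.seed(1)
n <- 60
x <- sort(runif(n, -3, 3))
y <- sin(2 * x) + rnorm(n, sd = 0.3)
k_gauss   <- function(a, b, s = 1) exp(-outer(a, b, "-")^2 / s^2)
k_laplace <- function(a, b, s = 1) exp(-abs(outer(a, b, "-")) / s)
krr <- function(K, Kt, y, lambda = 0.1) Kt %*% solve(K + lambda * diag(nrow(K)), y)
xg <- seq(-3, 3, length.out = 400)
f_gauss   <- krr(k_gauss(x, x),   k_gauss(xg, x),   y)
f_laplace <- krr(k_laplace(x, x), k_laplace(xg, x), y)
plot(x, y)
lines(xg, f_gauss)             # smooth fit
lines(xg, f_laplace, lty = 2)  # visibly rougher fit, with kinks at the data points
The Gaussian-kernel fit is infinitely differentiable, while the Laplace-kernel fit has corners at the observations, matching the Fourier argument above. | {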
"source": [
"https://stats.stackexchange.com/questions/131138",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/35280/"
]
} |
131,255 | This is a question in general, not specific to any method or data set. How do we deal with a class imbalance problem in supervised machine learning where the number of 0 is around 90% and the number of 1 is around 10% in your dataset? How do we optimally train the classifier? One of the ways which I follow is sampling to make the dataset balanced, then training the classifier and repeating this for multiple samples. I feel this is random; is there any framework to approach these kinds of problems? | There are many frameworks and approaches. This is a recurrent issue. Examples: Undersampling. Select a subsample of the set of zeros such that its size matches the set of ones. There is an obvious loss of information, unless you use a more complex framework (for instance, I would split the first set into 9 smaller, mutually exclusive subsets, train a model on each one of them and ensemble the models). Oversampling. Produce artificial ones until the proportion is 50%/50%. My previous employer used this by default. There are many frameworks for this (I think SMOTE is the most popular, but I prefer simpler tricks like Noisy PCA). One Class Learning. Just assume your data has a few real points (the ones) and lots of random noise that doesn't physically exist leaked into the dataset (anything that is not a one is noise). Use an algorithm to denoise the data instead of a classification algorithm. Cost-Sensitive Training. Use an asymmetric cost function to artificially balance the training process. Some lit reviews, in increasing order of technical complexity/level of detail: On the Classification of Imbalanced Datasets; the ACM SIGKDD Explorations Newsletter - Special issue on learning from imbalanced datasets (read at least the editorial, it will be enlightening); Data Mining for Imbalanced Datasets: An Overview. Oh, and by the way, 90%/10% is not unbalanced. Card transaction fraud datasets are often split 99.97%/0.03%. That is unbalanced.
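A rough R sketch of the undersampling-plus-ensemble idea described above (the simulated data, the logistic learner and the choice of 9 subsets are illustrative assumptions):
set.seed(1)
n <- 10000
d <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
d$y <- rbinom(n, 1, plogis(-4 + 2 * d$x1))       # roughly 5% ones
pos <- d[d$y == 1, ]; neg <- d[d$y == 0, ]
k <- 9                                           # split the zeros into k folds
folds <- split(neg, sample(rep(1:k, length.out = nrow(neg))))
models <- lapply(folds, function(f)
  glm(y ~ x1 + x2, family = binomial, data = rbind(pos, f)))
p_hat <- rowMeans(sapply(models, predict, newdata = d, type = "response"))
head(p_hat)                                      # ensemble-averaged probabilities
In practice you would replace the plain logistic regressions with whatever base learner you prefer and evaluate on held-out data. | {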
"source": [
"https://stats.stackexchange.com/questions/131255",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/29600/"
]
} |