source_id | question | response | metadata
---|---|---|---|
63,441 | My statistics program implements both the Benjamini & Hochberg (1995) and Benjamini & Yekutieli (2001) false discovery rate (FDR) procedures. I have done my best to read through the later paper, but it is quite mathematically dense and I am not reasonably certain I understand the difference between the procedures. I can see from the underlying code in my statistics program that they are indeed different and that the latter includes a quantity q that I have seen referred to in regards to FDR, but also don't quite have a grasp of. Is there any reason to prefer the Benjamini & Hochberg (1995) procedure versus the Benjamini & Yekutieli (2001) procedure? Do they have different assumptions? What are the practical differences between these approaches? Benjamini, Y., and Hochberg, Y. (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. Journal of the Royal Statistical Society Series B, 57, 289–300. Benjamini, Y., and Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of Statistics 29, 1165–1188. The 1999 paper referenced in the comments below: Yekutieli, D., & Benjamini, Y. (1999). Resampling-based false discovery rate controlling multiple test procedures for correlated test statistics. Journal of Statistical Planning and Inference, 82(1), 171-196. | Benjamini and Hochberg (1995) introduced the false discovery rate. Benjamini and Yekutieli (2001) proved that the estimator is valid under some forms of dependence. Dependence can arise as follows. Consider the continuous variable used in a t-test and another variable correlated with it; for example, testing if BMI differs in two groups and if waist circumference differs in these two groups. Because these variables are correlated, the resulting p-values will also be correlated. Yekutieli and Benjamini (1999) developed another FDR controlling procedure, which can be used under general dependence by resampling the null distribution. Because the comparison is with respect to the null permutation distribution, as the total number of true positives increases, the method becomes more conservative. It turns out that BH 1995 is also conservative as the number of true positives increases. To improve this, Benjamini and Hochberg (2000) introduced the adaptive FDR procedure. This required estimation of a parameter, the null proportion, which is also used in Storey's pFDR estimator. Storey gives comparisons and argues that his method is more powerful and emphasizes the conservative nature of 1995 procedure. Storey also has results and simulations under dependence. All of the above tests are valid under independence. The question is what kind of departure from independence can these estimates deal with. My current thinking is that if you don't expect too many true positives the BY (1999) procedure is nice because it incorporates distributional features and dependence. However, I'm unaware of an implementation. Storey's method was designed for many true positives with some dependence. BH 1995 offers an alternative to the family-wise error rate and it is still conservative. Benjamini, Y and Y Hochberg. On the Adaptive Control of the False Discovery Rate in Multiple Testing with Independent Statistics. Journal of Educational and Behavioral Statistics, 2000. | {
"source": [
"https://stats.stackexchange.com/questions/63441",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/196/"
]
} |
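A minimal R sketch (not from the answer above; the raw p-values are made up for illustration) of the practical difference between the two procedures, using base R's p.adjust(), which implements both adjustments:

# Hypothetical raw p-values from m = 10 tests
p <- c(0.0001, 0.0004, 0.0019, 0.0095, 0.0201,
       0.0278, 0.0298, 0.0344, 0.3240, 0.4590)

p.bh <- p.adjust(p, method = "BH")   # Benjamini & Hochberg (1995)
p.by <- p.adjust(p, method = "BY")   # Benjamini & Yekutieli (2001)

# BY multiplies the BH-type adjustment by sum(1/(1:m)), so it is always
# more conservative (larger adjusted p-values, fewer rejections)
round(cbind(raw = p, BH = p.bh, BY = p.by), 4)

Under independence (or the positive dependence covered by the 2001 paper) BH already controls the FDR at level q; the BY procedure buys validity under arbitrary dependence at the price of that extra log-like factor, i.e. less power.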
63,647 | What are the maximum-likelihood estimators for the parameters of Student's t-distribution? Do they exist in closed form? A quick Google search didn't give me any results. Today I am interested in the univariate case, but probably I will have to extend the model to multiple dimensions. EDIT: I am actually mostly interested in the location and scale parameters. For now I can assume that the degrees of freedom parameter is fixed, and possibly use some numeric scheme to find the optimal value later. | A closed form does not exist for the t-distribution, but a very intuitive and stable approach is via the EM algorithm. Because the Student t is a scale mixture of normals, you can write your model as $$y_i=\mu+e_i$$ where $e_i|\sigma,w_i \sim N(0,\sigma^2w_i^{-1})$ and $w_i\sim Ga(\frac{\nu}{2}, \frac{\nu}{2})$. This means that, conditionally on $w_i$, the MLEs are just the weighted mean and standard deviation. This is the "M" step: $$\hat{\mu}=\frac{\sum_iw_iy_i}{ \sum_iw_i}$$
$$\hat{\sigma}^2= \frac{\sum_iw_i(y_i-\hat{\mu})^2}{n}$$ Now the "E" step replaces $w_i$ with its expectation given all the data. This is given as: $$\hat{w}_i=\frac{(\nu+1) \sigma^2 }{\nu \sigma^2 +(y_i-\mu)^2}$$ So you simply iterate the above two steps, replacing the "right hand side" of each equation with the current parameter estimates. This very easily shows the robustness properties of the t-distribution, as observations with large residuals receive less weight in the calculation for the location $\mu$, and bounded influence in the calculation of $\sigma^2$. By "bounded influence" I mean that the contribution to the estimate for $\sigma^2$ from the $i$th observation cannot exceed a given threshold (this is $(\nu+1)\sigma^2_{old}$ in the EM algorithm). Also, $\nu$ is a "robustness" parameter in that increasing (decreasing) $\nu$ will result in more (less) uniform weights and hence more (less) sensitivity to outliers. One thing to note is that the log-likelihood function may have more than one stationary point, so the EM algorithm may converge to a local mode instead of a global mode. The local modes are likely to be found when the location parameter is started too close to an outlier. So starting at the median is a good way to avoid this. | {
"source": [
"https://stats.stackexchange.com/questions/63647",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8287/"
]
} |
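A short R sketch of the EM iteration described in the answer above (an illustration, not the answer author's code), for a fixed degrees-of-freedom parameter nu, starting the location at the median as recommended:

# EM for the location/scale of a t-distribution with fixed df = nu
t.em <- function(y, nu, tol = 1e-8, maxit = 500) {
  mu <- median(y)          # robust starting value, as suggested above
  sigma2 <- mad(y)^2
  for (it in 1:maxit) {
    # E-step: expected weights given the current parameter values
    w <- (nu + 1) * sigma2 / (nu * sigma2 + (y - mu)^2)
    # M-step: weighted mean, and weighted sum of squares divided by n
    mu.new <- sum(w * y) / sum(w)
    sigma2.new <- sum(w * (y - mu.new)^2) / length(y)
    if (abs(mu.new - mu) + abs(sigma2.new - sigma2) < tol) break
    mu <- mu.new
    sigma2 <- sigma2.new
  }
  list(mu = mu.new, sigma = sqrt(sigma2.new), iterations = it)
}

set.seed(1)
y <- 5 + 2 * rt(200, df = 4)   # simulated data: location 5, scale 2
t.em(y, nu = 4)

Observations far from the current location get small weights w, which is exactly the bounded-influence behaviour described in the answer.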
63,652 | I've been looking into the boot package in R and while I have found a number of good primers on how to use it, I have yet to find anything that describes exactly what is happening "behind the scenes". For instance, in this example , the guide shows how to use standard regression coefficients as a starting point for a bootstrap regression but doesn't explain what the bootstrap procedure is actually doing to derive the bootstrap regression coefficients. It appears there is some sort of iterative process that is happening but I can't seem to figure out exactly what is going on. | There are several "flavours" or forms of the bootstrap (e.g. non-parametric, parametric, residual resampling and many more). The bootstrap in the example is called a non-parametric bootstrap , or case resampling (see here , here , here and here for applications in regression). The basic idea is that you treat your sample as population and repeatedly draw new samples from it with replacement . All original observations have equal probability of being drawn into the new sample. Then you calculate and store the statistic(s) of interest, this may be the mean, the median or regression coefficients using the newly drawn sample . This is repeated $n$ times. In each iteration, some observations from your original sample are drawn multiple times while some observations may not be drawn at all. After $n$ iterations, you have $n$ stored bootstrap estimates of the statistic(s) of interest (e.g. if $n=1000$ and the statistic of interest is the mean, you have 1000 bootstrapped estimates of the mean). Lastly, summary statistics such as the mean, median and the standard deviation of the $n$ bootstrap-estimates are calculated. Bootstrapping is often used for: Calculation of confidence intervals (and estimation of the standard errors) Estimation of the bias of the point estimates There are several methods for calculating confidence intervals based on the bootstrap samples ( this paper provides explanation and guidance). One very simple method for calculating a 95%-confidence interval is just calculating the empirical 2.5th and 97.5th percentiles of the bootstrap samples (this interval is called the bootstrap percentile interval; see code below). The simple percentile interval method is rarely used in practice as there are better methods, such as the bias-corrected and accelerated bootstrap (BCa). BCa intervals adjust for both bias and skewness in the bootstrap distribution. The bias is simply estimated as the difference between the mean of the $n$ stored bootstrap samples and the original estimate(s). Let's replicate the example from the website but using our own loop incorporating the ideas I've outlined above (drawing repeatedly with replacement): #-----------------------------------------------------------------------------
# Load packages
#-----------------------------------------------------------------------------
require(ggplot2)
require(pscl)
require(MASS)
require(boot)
#-----------------------------------------------------------------------------
# Load data
#-----------------------------------------------------------------------------
zinb <- read.csv("http://www.ats.ucla.edu/stat/data/fish.csv")
zinb <- within(zinb, {
nofish <- factor(nofish)
livebait <- factor(livebait)
camper <- factor(camper)
})
#-----------------------------------------------------------------------------
# Calculate zero-inflated regression
#-----------------------------------------------------------------------------
m1 <- zeroinfl(count ~ child + camper | persons, data = zinb,
dist = "negbin", EM = TRUE)
#-----------------------------------------------------------------------------
# Store the original regression coefficients
#-----------------------------------------------------------------------------
original.estimates <- as.vector(t(do.call(rbind, coef(summary(m1)))[, 1:2]))
#-----------------------------------------------------------------------------
# Set the number of replications
#-----------------------------------------------------------------------------
n.sim <- 2000
#-----------------------------------------------------------------------------
# Set up a matrix to store the results
#-----------------------------------------------------------------------------
store.matrix <- matrix(NA, nrow=n.sim, ncol=12)
#-----------------------------------------------------------------------------
# The loop
#-----------------------------------------------------------------------------
set.seed(123)
for(i in 1:n.sim) {
#-----------------------------------------------------------------------------
# Draw the observations WITH replacement
#-----------------------------------------------------------------------------
data.new <- zinb[sample(1:dim(zinb)[1], dim(zinb)[1], replace=TRUE),]
#-----------------------------------------------------------------------------
# Calculate the model with this "new" data
#-----------------------------------------------------------------------------
m <- zeroinfl(count ~ child + camper | persons,
data = data.new, dist = "negbin",
start = list(count = c(1.3711, -1.5152, 0.879),
zero = c(1.6028, -1.6663)))
#-----------------------------------------------------------------------------
# Store the results
#-----------------------------------------------------------------------------
store.matrix[i, ] <- as.vector(t(do.call(rbind, coef(summary(m)))[, 1:2]))
}
#-----------------------------------------------------------------------------
# Save the means, medians and SDs of the bootstrapped statistics
#-----------------------------------------------------------------------------
boot.means <- colMeans(store.matrix, na.rm=T)
boot.medians <- apply(store.matrix,2,median, na.rm=T)
boot.sds <- apply(store.matrix,2,sd, na.rm=T)
#-----------------------------------------------------------------------------
# The bootstrap bias is the difference between the mean bootstrap estimates
# and the original estimates
#-----------------------------------------------------------------------------
boot.bias <- colMeans(store.matrix, na.rm=T) - original.estimates
#-----------------------------------------------------------------------------
# Basic bootstrap CIs based on the empirical quantiles
#-----------------------------------------------------------------------------
conf.mat <- matrix(apply(store.matrix, 2 ,quantile, c(0.025, 0.975), na.rm=T),
ncol=2, byrow=TRUE)
colnames(conf.mat) <- c("95%-CI Lower", "95%-CI Upper")

And here is our summary table:

#-----------------------------------------------------------------------------
# Set up summary data frame
#-----------------------------------------------------------------------------
summary.frame <- data.frame(mean=boot.means, median=boot.medians,
sd=boot.sds, bias=boot.bias, "CI_lower"=conf.mat[,1], "CI_upper"=conf.mat[,2])
summary.frame
mean median sd bias CI_lower CI_upper
1 1.2998 1.3013 0.39674 -0.0712912 0.51960 2.0605
2 0.2527 0.2486 0.03208 -0.0034461 0.19898 0.3229
3 -1.5662 -1.5572 0.26220 -0.0509239 -2.12900 -1.0920
4 0.2005 0.1986 0.01949 0.0049019 0.16744 0.2418
5 0.9544 0.9252 0.48915 0.0753405 0.03493 1.9025
6 0.2702 0.2688 0.02043 0.0009583 0.23272 0.3137
7 -0.8997 -0.9082 0.22174 0.0856793 -1.30664 -0.4380
8 0.1789 0.1781 0.01667 0.0029513 0.14494 0.2140
9 2.0683 1.7719 1.59102 0.4654898 0.44150 8.0471
10 4.0209 0.8270 13.23434 3.1845710 0.58114 57.6417
11 -2.0969 -1.6717 1.56311 -0.4306844 -8.43440 -1.1156
12 3.8660 0.6435 13.27525 3.1870642 0.33631 57.6062

Some explanations: The difference between the mean of the bootstrap estimates and the original estimates is what is called "bias" in the output of boot. What the output of boot calls "std. error" is the standard deviation of the bootstrapped estimates.

Compare it with the output from boot:

#-----------------------------------------------------------------------------
# Compare with boot output and confidence intervals
#-----------------------------------------------------------------------------
set.seed(10)
# 'f' is the statistic function defined in the linked UCLA example: it refits the
# zero-inflated model on each resampled data set and returns the estimates and
# standard errors (12 values, matching the columns of store.matrix above)
res <- boot(zinb, f, R = 2000, parallel = "snow", ncpus = 4)
res
Bootstrap Statistics :
original bias std. error
t1* 1.3710504 -0.076735010 0.39842905
t2* 0.2561136 -0.003127401 0.03172301
t3* -1.5152609 -0.064110745 0.26554358
t4* 0.1955916 0.005819378 0.01933571
t5* 0.8790522 0.083866901 0.49476780
t6* 0.2692734 0.001475496 0.01957823
t7* -0.9853566 0.083186595 0.22384444
t8* 0.1759504 0.002507872 0.01648298
t9* 1.6031354 0.482973831 1.58603356
t10* 0.8365225 3.240981223 13.86307093
t11* -1.6665917 -0.453059768 1.55143344
t12* 0.6793077 3.247826469 13.90167954
perc.cis <- matrix(NA, nrow=dim(res$t)[2], ncol=2)
for( i in 1:dim(res$t)[2] ) {
perc.cis[i,] <- boot.ci(res, conf=0.95, type="perc", index=i)$percent[4:5]
}
colnames(perc.cis) <- c("95%-CI Lower", "95%-CI Upper")
perc.cis
95%-CI Lower 95%-CI Upper
[1,] 0.52240 2.1035
[2,] 0.19984 0.3220
[3,] -2.12820 -1.1012
[4,] 0.16754 0.2430
[5,] 0.04817 1.9084
[6,] 0.23401 0.3124
[7,] -1.29964 -0.4314
[8,] 0.14517 0.2149
[9,] 0.29993 8.0463
[10,] 0.57248 56.6710
[11,] -8.64798 -1.1088
[12,] 0.33048 56.6702
#-----------------------------------------------------------------------------
# Our summary table
#-----------------------------------------------------------------------------
summary.frame
mean median sd bias CI_lower CI_upper
1 1.2998 1.3013 0.39674 -0.0712912 0.51960 2.0605
2 0.2527 0.2486 0.03208 -0.0034461 0.19898 0.3229
3 -1.5662 -1.5572 0.26220 -0.0509239 -2.12900 -1.0920
4 0.2005 0.1986 0.01949 0.0049019 0.16744 0.2418
5 0.9544 0.9252 0.48915 0.0753405 0.03493 1.9025
6 0.2702 0.2688 0.02043 0.0009583 0.23272 0.3137
7 -0.8997 -0.9082 0.22174 0.0856793 -1.30664 -0.4380
8 0.1789 0.1781 0.01667 0.0029513 0.14494 0.2140
9 2.0683 1.7719 1.59102 0.4654898 0.44150 8.0471
10 4.0209 0.8270 13.23434 3.1845710 0.58114 57.6417
11 -2.0969 -1.6717 1.56311 -0.4306844 -8.43440 -1.1156
12 3.8660 0.6435 13.27525 3.1870642 0.33631 57.6062

Compare the "bias" and "std. error" columns of the boot output with the "bias" and "sd" columns of our own summary table. Our 95%-confidence intervals are very similar to the confidence intervals calculated by boot.ci using the percentile method (not all though: look at the lower limit of the parameter with index 9). | {
"source": [
"https://stats.stackexchange.com/questions/63652",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/26640/"
]
} |
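As a follow-up to the answer above (a sketch added here, not the original author's code): the bias-corrected and accelerated (BCa) intervals mentioned as the usual improvement over the simple percentile method can be pulled from the same boot object. They need the boot object itself (the replicates plus the resampling indices), so they cannot be computed from the hand-rolled store.matrix alone:

# BCa confidence intervals for all 12 statistics from the 'res' object above
bca.cis <- t(sapply(seq_len(dim(res$t)[2]), function(i)
  boot.ci(res, conf = 0.95, type = "bca", index = i)$bca[4:5]))
colnames(bca.cis) <- c("95%-CI Lower", "95%-CI Upper")
bca.cis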
63,978 | Almost everything I read about linear regression and GLM boils down to this: $y = f(x,\beta)$ where $f(x,\beta)$ is a non-increasing or non-decreasing function of $x$ and $\beta$ is the parameter you estimate and test hypotheses about. There are dozens of link functions and transformations of $y$ and $x$ to make $y$ a linear function of $f(x,\beta)$. Now, if you remove the non-increasing/non-decreasing requirement for $f(x,\beta)$, I know of only two choices for fitting a parametric linearized model: trig functions and polynomials. Both create artificial dependence between each predicted $y$ and the entire set of $X$, making them a very non-robust fit unless there are prior reasons to believe that your data actually are generated by a cyclical or polynomial process. This is not some kind of esoteric edge case. It's the actual, common-sense relationship between water and crop yields (once the plots are deep enough under water, crop yields will start diminishing), or between calories consumed at breakfast and performance on a math quiz, or number of workers in a factory and the number of widgets they produce... in short, almost any real life case for which linear models are used but with the data covering a wide enough range that you go past diminishing returns into negative returns. I tried looking for the terms 'concave', 'convex', 'curvilinear', 'non-monotonic', 'bathtub', and I forget how many others. Few relevant questions and even fewer usable answers. So, in practical terms, if you had the following data (R code, y is a function of continuous variable x and discrete variable group): updown<-data.frame(y=c(46.98,38.39,44.21,46.28,41.67,41.8,44.8,45.22,43.89,45.71,46.09,45.46,40.54,44.94,42.3,43.01,45.17,44.94,36.27,43.07,41.85,40.5,41.14,43.45,33.52,30.39,27.92,19.67,43.64,43.39,42.07,41.66,43.25,42.79,44.11,40.27,40.35,44.34,40.31,49.88,46.49,43.93,50.87,45.2,43.04,42.18,44.97,44.69,44.58,33.72,44.76,41.55,34.46,32.89,20.24,22,17.34,20.14,20.36,24.39,22.05,24.21,26.11,28.48,29.09,31.98,32.97,31.32,40.44,33.82,34.46,42.7,43.03,41.07,41.02,42.85,44.5,44.15,52.58,47.72,44.1,21.49,19.39,26.59,29.38,25.64,28.06,29.23,31.15,34.81,34.25,36,42.91,38.58,42.65,45.33,47.34,50.48,49.2,55.67,54.65,58.04,59.54,65.81,61.43,67.48,69.5,69.72,67.95,67.25,66.56,70.69,70.15,71.08,67.6,71.07,72.73,72.73,81.24,73.37,72.67,74.96,76.34,73.65,76.44,72.09,67.62,70.24,69.85,63.68,64.14,52.91,57.11,48.54,56.29,47.54,19.53,20.92,22.76,29.34,21.34,26.77,29.72,34.36,34.8,33.63,37.56,42.01,40.77,44.74,40.72,46.43,46.26,46.42,51.55,49.78,52.12,60.3,58.17,57,65.81,72.92,72.94,71.56,66.63,68.3,72.44,75.09,73.97,68.34,73.07,74.25,74.12,75.6,73.66,72.63,73.86,76.26,74.59,74.42,74.2,65,64.72,66.98,64.27,59.77,56.36,57.24,48.72,53.09,46.53),
x=c(216.37,226.13,237.03,255.17,270.86,287.45,300.52,314.44,325.61,341.12,354.88,365.68,379.77,393.5,410.02,420.88,436.31,450.84,466.95,477,491.89,509.27,521.86,531.53,548.11,563.43,575.43,590.34,213.33,228.99,240.07,250.4,269.75,283.33,294.67,310.44,325.36,340.48,355.66,370.43,377.58,394.32,413.22,428.23,436.41,455.58,465.63,475.51,493.44,505.4,521.42,536.82,550.57,563.17,575.2,592.27,86.15,91.09,97.83,103.39,107.37,114.78,119.9,124.39,131.63,134.49,142.83,147.26,152.2,160.9,163.75,172.29,173.62,179.3,184.82,191.46,197.53,201.89,204.71,214.12,215.06,88.34,109.18,122.12,133.19,148.02,158.72,172.93,189.23,204.04,219.36,229.58,247.49,258.23,273.3,292.69,300.47,314.36,325.65,345.21,356.19,367.29,389.87,397.74,411.46,423.04,444.23,452.41,465.43,484.51,497.33,507.98,522.96,537.37,553.79,566.08,581.91,595.84,610.7,624.04,637.53,649.98,663.43,681.67,698.1,709.79,718.33,734.81,751.93,761.37,775.12,790.15,803.39,818.64,833.71,847.81,88.09,105.72,123.35,132.19,151.87,161.5,177.34,186.92,201.35,216.09,230.12,245.47,255.85,273.45,285.91,303.99,315.98,325.48,343.01,360.05,373.17,381.7,398.41,412.66,423.66,443.67,450.39,468.86,483.93,499.91,511.59,529.34,541.35,550.28,568.31,584.7,592.33,615.74,622.45,639.1,651.41,668.08,679.75,692.94,708.83,720.98,734.42,747.83,762.27,778.74,790.97,806.99,820.03,831.55,844.23),
group=factor(rep(c('A','B'),c(81,110))));
plot(y~x,updown,subset=x<500,col=group); You might first try a Box-Cox transformation and see if it made mechanistic sense, and failing that, you might fit a nonlinear least squares model with a logistic or asymptotic link function. So, why should you give up parametric models completely and fall back on a black-box method like splines when you find out that the full dataset looks like this... plot(y~x,updown,col=group); My questions are: What terms should I search for in order to either find link functions that represent this class of functional relationships? or What should I read and/or search for in order to teach myself how to design link functions to this class of functional relationships or extend existing ones that currently are only for monotonic responses? or Heck, even what StackExchange tag is most appropriate for this type of question! | The remarks in the question about link functions and monotonicity are a red herring. Underlying them seems to be an implicit assumption that a generalized linear model (GLM), by expressing the expectation of a response $Y$ as a monotonic function $f$ of a linear combination $X\beta$ of explanatory variables $X$ , is not flexible enough to account for non-monotonic responses. That's just not so. Perhaps a worked example will illuminate this point. In a 1948 study (published posthumously in 1977 and never peer-reviewed), J. Tolkien reported the results of a plant watering experiment in which 13 groups of 24 sunflowers ( Helianthus Gondorensis ) were given controlled amounts of water starting at germination through three months of growth. The total amounts applied varied from one inch to 25 inches in two-inch increments. There is a clear positive response to the watering and a strong negative response to over-watering. Earlier work, based on hypothetical kinetic models of ion transport, had hypothesized that two competing mechanisms might account for this behavior: one resulted in a linear response to small amounts of water (as measured in the log odds of survival), while the other--an inhibiting factor--acted exponentially (which is a strongly non-linear effect). With large amounts of water, the inhibiting factor would overwhelm the positive effects of the water and appreciably increase mortality. Let $\kappa$ be the (unknown) inhibition rate (per unit amount of water). This model asserts that the number $Y$ of survivors in a group of size $n$ receiving $x$ inches of water should have a $$\text{Binomial}\left(n, f(\beta_0 + \beta_1 x - \beta_2 \exp(\kappa x))\right)$$ distribution, where $f$ is the link function converting log odds back to a probability. This is a binomial GLM. As such, although it is manifestly nonlinear in $x$ , given any value of $\kappa$ it is linear in its parameters $\beta_0$ , $\beta_1$ , and $\beta_2$ . "Linearity" in the GLM setting has to be understood in the sense that $f^{-1}\left(\mathbb{E}[Y]\right)$ is a linear combination of these parameters whose coefficients are known for each $x$ . And they are: they equal $1$ (the coefficient of $\beta_0$ ), $x$ itself (the coefficient of $\beta_1$ ), and $-\exp(\kappa x)$ (the coefficient of $\beta_2$ ). This model--although it is somewhat novel and not completely linear in its parameters--can be fit using standard software by maximizing the likelihood for arbitrary $\kappa$ and selecting the $\kappa$ for which this maximum is largest. Here is R code to do so, beginning with the data: water <- seq(1, 25, length.out=13)
n.survived <- c(0, 3, 4, 12, 18, 21, 23, 24, 22, 23, 18, 3, 2)
pop <- 24
counts <- cbind(n.survived, n.died=pop-n.survived)
f <- function(k) {
fit <- glm(counts ~ water + I(-exp(water * k)), family=binomial)
list(AIC=AIC(fit), fit=fit)
}
k.est <- optim(0.1, function(k) f(k)$AIC, method="Brent", lower=0, upper=1)$par
fit <- f(k.est)$fit

There are no technical difficulties; the calculation takes only 1/30 second. The blue curve in the resulting plot is the fitted expectation of the response, $\mathbb{E}[Y]$. Obviously (a) the fit is good and (b) it predicts a non-monotonic relationship between $\mathbb{E}[Y]$ and $x$ (an upside-down "bathtub" curve). To make this perfectly clear, here is the follow-up code in R used to compute and plot the fit:

x.0 <- seq(min(water), max(water), length.out=100)
p.0 <- cbind(rep(1, length(x.0)), x.0, -exp(k.est * x.0))
logistic <- function(x) 1 - 1/(1 + exp(x))
predicted <- pop * logistic(p.0 %*% coef(fit))
plot(water, n.survived / pop, main="Data and Fit",
xlab="Total water (inches)",
ylab="Proportion surviving at 3 months")
lines(x.0, predicted / pop, col="#a0a0ff", lwd=2)

The answers to the questions are: What terms should I search for in order to either find link functions that represent this class of functional relationships? None: that is not the purpose of the link function. What should I ... search for in order to ... extend existing [link functions] that currently are only for monotonic responses? Nothing: this is based on a misunderstanding of how responses are modeled. Evidently, one should first focus on what explanatory variables to use or construct when building a regression model. As suggested in this example, look for guidance from past experience and theory. | {
"source": [
"https://stats.stackexchange.com/questions/63978",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/4829/"
]
} |
64,026 | In this comment , Nick Cox wrote: Binning into classes is an ancient method. While histograms can be useful, modern statistical software makes it easy as well as advisable to fit distributions to the raw data. Binning just throws away detail that is crucial in determining which distributions are plausible. The context of this comment suggests using QQ-plots as an alternative means to evaluate the fit. The statement sounds very plausible, but I'd like to know about a reliable reference supporting this statement. Is there some paper which does a more thorough investigation of this fact, beyond a simple “well, this sounds obvious”? Any actual systematic comparisons of results or the likes? I'd also like to see how far this benefit of QQ-plots over histograms can be stretched, to applications other than model fitting. Answers on this question agree that “a QQ-plot […] just tells you that "something is wrong"”. I am thinking about using them as a tool to identify structure in observed data as compared to a null model and wonder whether there exist any established procedures to use QQ-plots (or their underlying data) to not only detect but also describe non-random structure in the observed data. References which include this direction would therefore be particularly useful. | The canonical paper here was: Wilk, M.B. and R. Gnanadesikan. 1968. Probability plotting methods for the analysis of data. Biometrika 55: 1-17 and it still repays close and repeated reading. A lucid treatment with many good examples was given by: Cleveland, W.S. 1993. Visualizing Data. Summit, NJ: Hobart Press. and it is worth mentioning the more introductory: Cleveland, W.S. 1994. The Elements of Graphing Data. Summit, NJ: Hobart Press. Other texts containing reasonable exposure to this approach include: Davison, A.C. 2003. Statistical Models. Cambridge: Cambridge University Press. Rice, J.A. 2007. Mathematical Statistics and Data Analysis. Belmont, CA: Duxbury. That aside, I don't know of anything that is quite what you ask. Once you have seen the point of quantile-quantile plots, showing in detail that histograms are a second-rate alternative seems neither interesting nor useful, too much like shooting fish in a barrel. But I would summarize like this: Binning suppresses details, and the details are often important. This can apply not only to exactly what is going on in the tails but also to what is going on in the middle. For example, granularity or multimodality may be important as well as skewness or tail weight. Binning requires decisions about bin origin and bin width, which can affect the appearance of histograms mightily, so it is hard to see what is real and what is a side-effect of choices. If your software makes these decisions for you, the problems remain. (For example, default bin choices are often designed so that you do not use "too many bins", i.e. with the motive of smoothing a little.) The graphical and psychological problem of comparing two histograms is trickier than that of judging the fit of a set of points to a straight line. [Added 27 Sept 2017] 4. Quantile plots can be varied very easily when considering one or more transformed scales. By transformation here I mean a nonlinear transformation, not e.g. scaling by a maximum or standardisation by (value $-$ mean) / SD. If the quantiles are just the order statistics, then all you need to do is to apply the transformation, as e.g. the logarithm of the maximum is identically the maximum of the logarithms, and so forth. 
(Trivially, reciprocation reverses order.) Even if you plot selected quantiles that are based on two order statistics, usually they are just interpolated between two original data values and the effect of the interpolation is usually minor. In contrast, histograms on log or other transformed scales require a fresh decision on bin origin and width that isn't especially difficult, but it can be awkward. Much the same can be said of density estimation as a way to summarize the distribution. Naturally, whatever transformation you apply must make sense for the data, so that logarithms can only usefully be applied for a positive variable. | {
"source": [
"https://stats.stackexchange.com/questions/64026",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/15959/"
]
} |
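A small R sketch (added for illustration; not from the answer) contrasting the two displays discussed above: the same positive, right-skewed sample shown as a histogram with default bins and as a normal quantile-quantile plot, on the raw and log scales:

set.seed(42)
x <- rlnorm(100, meanlog = 0, sdlog = 0.6)   # a positive, right-skewed sample

op <- par(mfrow = c(2, 2))
hist(x, main = "Histogram, raw scale")
qqnorm(x, main = "Normal QQ-plot, raw scale"); qqline(x)
hist(log(x), main = "Histogram, log scale")
qqnorm(log(x), main = "Normal QQ-plot, log scale"); qqline(log(x))
par(op)

Note how point 4 of the answer shows up here: the QQ-plot on the log scale only requires transforming the order statistics, whereas the log-scale histogram needs a fresh choice of bin origin and width.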
64,031 | Has anyone any idea how one could distinguish time series according to certain properties?
The only time series properties I know are stationarity/nonstationarity and homoskedasticity/heteroskedasticity. But are there any other possibilities to distinguish time series? | | {
"source": [
"https://stats.stackexchange.com/questions/64031",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/25944/"
]
} |
64,147 | My understanding is that with cross validation and model selection we try to address two things: P1 . Estimate the expected loss on the population when training with our sample P2 . Measure and report our uncertainty of this estimation (variance, confidence intervals, bias, etc.) Standard practice seems to be to do repeated cross validation, since this reduces the variance of our estimator. However, when it comes to reporting and analysis, my understanding is that internal validation is better than external validation because: It is better to report: The statistics of our estimator, e.g. its confidence interval, variance, mean, etc. on the full sample (in this case the CV sample). than reporting: The loss of our estimator on a hold-out subset of the original sample, since: (i) This would be a single measurement ( even if we pick our estimator with CV ) (ii) Our estimator for this single measurement would have been trained on a set (e.g. the CV set) that is smaller than our initial sample since we have to make room for the hold-out set. This results in a more biased (pessimistic) estimation in P1 . Is this correct? If not why? Background: It is easy to find textbooks that recommend dividing your sample into two sets: The CV set, which is subsequently and repeatedly divided into train and validation sets. The hold-out (test) set, only used at the end to report the estimator performance My question is an attempt to understand the merits and advantages of this textbook approach, considering that our goal is to really address the problems P1 and P2 at the beginning of this post. It looks to me that reporting on the hold-out test set is bad practice since the analysis of the CV sample is more informative. Nested K-fold vs repeated K-fold: One can in principle combine hold-out with regular K-fold to obtain nested K-fold . This would allow us to measure the variability of our estimator, but it looks to me that for the same number of total models trained (total # of folds) repeated K-fold would yield estimators that are less biased and more accurate than nested K-fold. To see this: Repeated K-fold uses a larger fraction of our total sample than nested K-fold for the same K (i.e. it leads to lower bias) 100 iterations would only give 10 measurements of our estimator in nested K-fold (K=10), but 100 measurements in K-fold (more measurements leads to lower variance in P2 ) What's wrong with this reasoning? | Let me add a few points to the nice answers that are already here: Nested K-fold vs repeated K-fold: nested and repeated k-fold are totally different things, used for different purposes. As you already know , nested is good if you want to use the inner cv for model selection. repeated: IMHO you should always repeat the k-fold cv [see below]. I therefore recommend to repeat any nested k-fold cross validation . Better report "The statistics of our estimator, e.g. its confidence interval, variance, mean, etc. on the full sample (in this case the CV sample)." : Sure. However, you need to be aware of the fact that you will not (easily) be able to estimate the confidence interval by the cross validation results alone. The reason is that, however much you resample, the actual number of cases you look at is finite (and usually rather small - otherwise you'd not bother about these distinctions). See e.g. Bengio, Y. and Grandvalet, Y.: No Unbiased Estimator of the Variance of K-Fold Cross-Validation Journal of Machine Learning Research, 2004, 5, 1089-1105 . 
However, in some situations you can nevertheless make estimations of the variance:
With repeated k-fold cross validation, you can get an idea whether model instability does play a role. And this instability-related variance is actually the part of the variance that you can reduce by repeated cross-validation.
(If your models are perfectly stable, each repetition/iteration of the cross validation will have exactly the same predictions for each case. However, you still have variance due to the actual choice/composition of your data set). So there is a limit to the lower variance of repeated k-fold cross validation. Doing more and more repetitions/iterations does not make sense, as the variance caused by the fact that in the end only $n$ real cases were tested is not affected. The variance caused by the fact that in the end only $n$ real cases were tested can be estimated for some special cases, e.g. the performance of classifiers as measured by proportions such as hit rate, error rate, sensitivity, specificity, predictive values and so on: they follow binomial distributions
Unfortunately, this means that they have huge variance $\sigma^2 (\hat p) = \frac{1}{n} p (1 - p)$ with $p$ the true performance value of the model, $\hat p$ the observed, and $n$ the sample size in the denominator of the fraction. This has the maximum for $p = 0.5$. You can also calculate confidence intervals starting from the observation.
(@Frank Harrell will comment that these are not proper scoring rules, so you shouldn't use them anyway - which is related to the huge variance). However, IMHO they are useful for deriving conservative bounds (there are better scoring rules, and the bad behaviour of these fractions is a worst-case limit for the better rules), see e.g. C. Beleites, R. Salzer and V. Sergo: Validation of Soft Classification Models using Partial Class Memberships: An Extended Concept of Sensitivity & Co. applied to Grading of Astrocytoma Tissues, Chemom. Intell. Lab. Syst., 122 (2013), 12 - 22. So this lets me turn around your argumentation against the hold-out: neither does resampling alone (necessarily) give you a good estimate of the variance; OTOH, if you can reason about the finite-test-sample-size variance of the cross-validation estimate, that is also possible for the hold-out. "Our estimator for this single measurement would have been trained on a set (e.g. the CV set) that is smaller than our initial sample since we have to make room for the hold-out set. This results in a more biased (pessimistic) estimation in P1." Not necessarily (if compared to k-fold) - but you have to trade off: a small hold-out set (e.g. $\frac{1}{k}$ of the sample) => low bias (≈ same as k-fold cv), high variance (> k-fold cv, roughly by a factor of k). "It looks to me that reporting on the hold-out test set is bad practice since the analysis of the CV sample is more informative." Usually, yes. However, it is also good to keep in mind that there are important types of errors (such as drift) that cannot be measured/detected by resampling validation. See e.g. Esbensen, K. H. and Geladi, P. Principles of Proper Validation: use and abuse of re-sampling for validation, Journal of Chemometrics, 2010, 24, 168-187. "But it looks to me that for the same number of total models trained (total # of folds) repeated K-fold would yield estimators that are less biased and more accurate than nested K-fold. To see this: Repeated K-fold uses a larger fraction of our total sample than nested K-fold for the same K (i.e. it leads to lower bias)." I'd say no to this: it doesn't matter how the model training uses its $\frac{k - 1}{k} n$ training samples, as long as the surrogate models and the "real" model use them in the same way. (I look at the inner cross-validation / estimation of hyper-parameters as part of the model set-up.) Things look different if you compare surrogate models which are trained including hyper-parameter optimization to "the" model which is trained on fixed hyper-parameters. But IMHO that is generalizing from $k$ apples to 1 orange. "100 iterations would only give 10 measurements of our estimator in nested K-fold (K=10), but 100 measurements in K-fold (more measurements leads to lower variance in P2)." Whether this does make a difference depends on the instability of the (surrogate) models, see above. For stable models it is irrelevant. So it may not matter whether you do 1000 or 100 outer repetitions/iterations. And this paper definitely belongs on the reading list on this topic: Cawley, G. C. and Talbot, N. L. C. On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation, Journal of Machine Learning Research, 2010, 11, 2079-2107 | {
"source": [
"https://stats.stackexchange.com/questions/64147",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2798/"
]
} |
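A tiny R illustration of the finite-test-set variance bound discussed in the answer above (the numbers are hypothetical): with n tested cases and an observed hit rate, the binomial standard error and a confidence interval follow directly, and no amount of repeated cross-validation shrinks this component because only n distinct cases were ever tested:

n <- 100; hits <- 80                  # hypothetical: 80 of 100 test cases correct
p.hat <- hits / n
sqrt(p.hat * (1 - p.hat) / n)         # binomial standard error, about 0.04 here
binom.test(hits, n)$conf.int          # confidence interval for the true hit rate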
64,195 | I know that
$$\hat{\beta_0}=\bar{y}-\hat{\beta_1}\bar{x}$$
and this is how far I got when I calculated the variance: \begin{align*}
Var(\hat{\beta_0}) &= Var(\bar{y} - \hat{\beta_1}\bar{x}) \\
&= Var((-\bar{x})\hat{\beta_1}+\bar{y}) \\
&= Var((-\bar{x})\hat{\beta_1})+Var(\bar{y}) \\
&= (-\bar{x})^2 Var(\hat{\beta_1}) + 0 \\
&= (\bar{x})^2 Var(\hat{\beta_1}) + 0 \\
&= \frac{\sigma^2 (\bar{x})^2}{\displaystyle\sum\limits_{i=1}^n (x_i - \bar{x})^2}
\end{align*} but that's far as I got. The final formula I'm trying to calculate is \begin{align*}
Var(\hat{\beta_0}) &= \frac{\sigma^2 n^{-1}\displaystyle\sum\limits_{i=1}^n x_i^2}{\displaystyle\sum\limits_{i=1}^n (x_i - \bar{x})^2}
\end{align*} I'm not sure how to get $$(\bar{x})^2 = \frac{1}{n}\displaystyle\sum\limits_{i=1}^n x_i^2$$ assuming my math is correct up to there. Is this the right path? \begin{align}
(\bar{x})^2 &= \left(\frac{1}{n}\displaystyle\sum\limits_{i=1}^n x_i\right)^2 \\
&= \frac{1}{n^2} \left(\displaystyle\sum\limits_{i=1}^n x_i\right)^2
\end{align} I'm sure it's simple, so the answer can wait for a bit if someone has a hint to push me in the right direction. | This is a self-study question, so I provide hints that will hopefully help to find the solution, and I'll edit the answer based on your feedbacks/progress. The parameter estimates that minimize the sum of squares are
\begin{align}
\hat{\beta}_0 &= \bar{y} - \hat{\beta}_1 \bar{x} , \\
\hat{\beta}_1 &= \frac{ \sum_{i = 1}^n(x_i - \bar{x})y_i }{ \sum_{i = 1}^n(x_i - \bar{x})^2 } .
\end{align}
To get the variance of $\hat{\beta}_0$, start from its expression and substitute the expression of $\hat{\beta}_1$, and do the algebra
$$
{\rm Var}(\hat{\beta}_0) = {\rm Var} (\bar{Y} - \hat{\beta}_1 \bar{x}) = \ldots
$$ Edit: We have
\begin{align}
{\rm Var}(\hat{\beta}_0)
&= {\rm Var} (\bar{Y} - \hat{\beta}_1 \bar{x}) \\
&= {\rm Var} (\bar{Y}) + (\bar{x})^2 {\rm Var} (\hat{\beta}_1)
- 2 \bar{x} {\rm Cov} (\bar{Y}, \hat{\beta}_1).
\end{align}
The two variance terms are
$$
{\rm Var} (\bar{Y})
= {\rm Var} \left(\frac{1}{n} \sum_{i = 1}^n Y_i \right)
= \frac{1}{n^2} \sum_{i = 1}^n {\rm Var} (Y_i)
= \frac{\sigma^2}{n},
$$
and
\begin{align}
{\rm Var} (\hat{\beta}_1)
&= \frac{ 1 }{ \left[\sum_{i = 1}^n(x_i - \bar{x})^2 \right]^2 }
\sum_{i = 1}^n(x_i - \bar{x})^2 {\rm Var} (Y_i) \\
&= \frac{ \sigma^2 }{ \sum_{i = 1}^n(x_i - \bar{x})^2 } ,
\end{align}
and the covariance term is
\begin{align}
{\rm Cov} (\bar{Y}, \hat{\beta}_1)
&= {\rm Cov} \left\{
\frac{1}{n} \sum_{i = 1}^n Y_i,
\frac{ \sum_{j = 1}^n(x_j - \bar{x})Y_j }{ \sum_{i = 1}^n(x_i - \bar{x})^2 }
\right \} \\
&= \frac{1}{n} \frac{ 1 }{ \sum_{i = 1}^n(x_i - \bar{x})^2 }
{\rm Cov} \left\{ \sum_{i = 1}^n Y_i, \sum_{j = 1}^n(x_j - \bar{x})Y_j \right\} \\
&= \frac{ 1 }{ n \sum_{i = 1}^n(x_i - \bar{x})^2 }
\sum_{j = 1}^n (x_j - \bar{x}) \sum_{i = 1}^n {\rm Cov}(Y_i, Y_j) \\
&= \frac{ 1 }{ n \sum_{i = 1}^n(x_i - \bar{x})^2 }
\sum_{j = 1}^n (x_j - \bar{x}) \sigma^2 \\
&= 0
\end{align}
since $\sum_{j = 1}^n (x_j - \bar{x})=0$. And since
$$\sum_{i = 1}^n(x_i - \bar{x})^2
= \sum_{i = 1}^n x_i^2 - 2 \bar{x} \sum_{i = 1}^n x_i
+ \sum_{i = 1}^n \bar{x}^2
= \sum_{i = 1}^n x_i^2 - n \bar{x}^2,
$$
we have
\begin{align}
{\rm Var}(\hat{\beta}_0)
&= \frac{\sigma^2}{n} + \frac{ \sigma^2 \bar{x}^2}{ \sum_{i = 1}^n(x_i - \bar{x})^2 } \\
&= \frac{\sigma^2 }{ n \sum_{i = 1}^n(x_i - \bar{x})^2 }
\left\{ \sum_{i = 1}^n(x_i - \bar{x})^2 + n \bar{x}^2 \right\} \\
&= \frac{\sigma^2 \sum_{i = 1}^n x_i^2}{ n \sum_{i = 1}^n(x_i - \bar{x})^2 }.
\end{align} Edit 2 Why do we have
${\rm var} ( \sum_{i = 1}^n Y_i) = \sum_{i = 1}^n {\rm Var} (Y_i) $? The assumed model is $ Y_i = \beta_0 + \beta_1 X_i + \epsilon_i$, where the $\epsilon_i$ are independant and identically distributed random variables with ${\rm E}(\epsilon_i) = 0$ and ${\rm var}(\epsilon_i) = \sigma^2$. Once we have a sample, the $X_i$ are known, the only random terms are the $\epsilon_i$. Recalling that for a random variable $Z$ and a constant $a$, we have ${\rm var}(a+Z) = {\rm var}(Z)$. Thus,
\begin{align}
{\rm var} \left( \sum_{i = 1}^n Y_i \right)
&= {\rm var} \left( \sum_{i = 1}^n \beta_0 + \beta_1 X_i + \epsilon_i \right)\\
&= {\rm var} \left( \sum_{i = 1}^n \epsilon_i \right)
= \sum_{i = 1}^n \sum_{j = 1}^n {\rm cov} (\epsilon_i, \epsilon_j)\\
&= \sum_{i = 1}^n {\rm cov} (\epsilon_i, \epsilon_i)
= \sum_{i = 1}^n {\rm var} (\epsilon_i)\\
&= \sum_{i = 1}^n {\rm var} (\beta_0 + \beta_1 X_i + \epsilon_i)
= \sum_{i = 1}^n {\rm var} (Y_i).\\
\end{align}
The 4th equality holds as ${\rm cov} (\epsilon_i, \epsilon_j) = 0$ for $i \neq j$ by the independence of the $\epsilon_i$. | {
"source": [
"https://stats.stackexchange.com/questions/64195",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/27969/"
]
} |
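A quick numerical check in R (added here, not part of the answer) that the final formula matches what lm() reports: the (Intercept, Intercept) entry of vcov() should equal $\hat{\sigma}^2 \sum_i x_i^2 / \left(n \sum_i (x_i - \bar{x})^2\right)$:

set.seed(1)
n <- 30
x <- runif(n, 0, 10)
y <- 2 + 3 * x + rnorm(n, sd = 1.5)

fit <- lm(y ~ x)
s2 <- summary(fit)$sigma^2                   # estimate of sigma^2

c(formula = s2 * sum(x^2) / (n * sum((x - mean(x))^2)),
  vcov = vcov(fit)[1, 1])                    # the two numbers agree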
64,680 | I'm interested in how one can calculate a quantile of a multivariate distribution. In the figures, I have drawn the 5% and 95% quantiles of a given univariate normal distribution (left). For the right multivariate normal distribution, I am imagining that an analog would be an isoline that encircles the base of the density function. Below is an example of my attempt to calculate this using the package mvtnorm - but to no success. I suppose this could be done by calculating a contour of the results of the multivariate density function, but I was wondering if there is another alternative ( e.g. , analog of qnorm ). Thanks for your help. Example: mu <- 5
sigma <- 2
vals <- seq(-2,12,,100)
ds <- dnorm(vals, mean=mu, sd=sigma)
plot(vals, ds, t="l")
qs <- qnorm(c(0.05, 0.95), mean=mu, sd=sigma)
abline(v=qs, col=2, lty=2)
#install.packages("mvtnorm")
require(mvtnorm)
n <- 2
mmu <- rep(mu, n)
msigma <- rep(sigma, n)
mcov <- diag(msigma^2)
mvals <- expand.grid(seq(-2,12,,100), seq(-2,12,,100))
mvds <- dmvnorm(x=mvals, mean=mmu, sigma=mcov)
persp(matrix(mvds,100,100), axes=FALSE)
mvqs <- qmvnorm(0.95, mean=mmu, sigma=mcov, tail = "both") #?
#ex. plot
png("tmp.png", width=8, height=4, units="in", res=400)
par(mfcol=c(1,2))
#univariate
plot(vals, ds, t="l")
qs <- qnorm(c(0.05, 0.95), mean=mu, sd=sigma)
abline(v=qs, col=2, lty=2)
#multivariate
pmat <- persp(seq(-2,12,,100), seq(-2,12,,100), matrix(mvds,100,100), axes=FALSE, shade=TRUE, lty=0)
cont <- contourLines(seq(-2,12,,100), seq(-2,12,,100), matrix(mvds,100,100), levels=0.05^2)
lines(trans3d(cont[[1]]$x, cont[[1]]$y, cont[[1]]$level, pmat), col=2, lty=2)
dev.off() | The contour line is an ellipsoid. The reason is because you have to look at the argument of the exponential, in the pdf of the multivariate normal distribution: the isolines would be lines with the same argument. Then you get
$$
({\bf x}-\mu)^T\Sigma^{-1}({\bf x}-\mu) = c
$$
where $\Sigma$ is the covariance matrix. That is exactly the equation of an ellipse; in the simplest case, $\mu=(0,0)$ and $\Sigma$ is diagonal, so you get
$$
\left(\frac{x}{\sigma_x}\right)^2+\left(\frac{y}{\sigma_y}\right)^2=c
$$
If $\Sigma$ is not diagonal, diagonalizing you get the same result. Now, you would have to integrate the pdf of the multivariate inside (or outside) the ellipse and request that this is equal to the quantile you want. Let's say that your quantiles are not the usual ones, but elliptical in principle (i.e. you are looking for the Highest Density Region, HDR, as Tim answer points out). I would change variables in the pdf to $z^2=(x/\sigma_x)^2+(y/\sigma_y)^2$, integrate in the angle and then for $z$ from $0$ to $\sqrt{c}$
$$
1-\alpha=\int_0^{\sqrt{c}}dz\frac{z\;e^{-z^2/2}}{2\pi}\int_0^{2\pi}d\theta=\int_0^{\sqrt{c}}z\;e^{-z^2/2}
$$
Then you substitute $s=-z^2/2$:
$$
\int_0^{\sqrt{c}}z\;e^{-z^2/2}=\int_{-c/2}^{0}e^sds=(1-e^{-c/2})$$ So in principle, you have to look for the ellipse centered at $\mu$, with axes along the eigenvectors of $\Sigma$ and effective squared radius $-2\ln\alpha$:
$$
({\bf x}-\mu)^T\Sigma^{-1}({\bf x}-\mu) = -2\ln{\alpha}
$$ | {
"source": [
"https://stats.stackexchange.com/questions/64680",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/10675/"
]
} |
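Translating the answer back into the R setting of the question (a sketch under the stated assumptions, not the answer author's code): for a chosen alpha the contour is the set of points whose squared Mahalanobis distance from mu equals $-2\ln\alpha$, which in two dimensions is identical to qchisq(1 - alpha, df = 2), and it can be drawn directly:

mu <- c(5, 5)
Sigma <- diag(c(2, 2)^2)            # same mean and sd as in the question
alpha <- 0.05
c2 <- -2 * log(alpha)               # equals qchisq(1 - alpha, df = 2)

theta <- seq(0, 2 * pi, length.out = 200)
unit.circle <- rbind(cos(theta), sin(theta))
ell <- t(mu + t(chol(Sigma)) %*% (sqrt(c2) * unit.circle))

plot(ell, type = "l", xlab = "x", ylab = "y",
     main = "95% highest-density contour")

Here t(chol(Sigma)) maps the unit circle onto the ellipse $({\bf x}-\mu)^T\Sigma^{-1}({\bf x}-\mu) = c$, so the curve encloses 95% of the probability mass.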
64,739 | I've been studying the Cox Proportional Hazards model, and this question is glossed over in most texts. Cox proposed fitting the coefficients of the Hazard function using a partial likelihood method, but why not just fit the coefficients of a parametric Survival function using the maximum likelihood method and a linear model? In any cases where you have censored data, you could just find the area under the curve. For example, if your estimate is 380 with standard deviation of 80, and a sample is censored >300, then there is an 84% probability for that sample in the likelihood calculation assuming normal error. | If you know the parametric distribution that your data follows then using a maximum likelihood approach and the distribution makes sense. The real advantage of Cox Proportional Hazards regression is that you can still fit survival models without knowing (or assuming) the distribution. You give an example using the normal distribution, but most survival times (and other types of data that Cox PH regression is used for) do not come close to following a normal distribution. Some may follow a log-normal, or a Weibull, or other parametric distribution, and if you are willing to make that assumption then the maximum likelihood parametric approach is great. But in many real world cases we do not know what the appropriate distribution is (or even a close enough approximation). With censoring and covariates we cannot do a simple histogram and say "that looks like a ... distribution to me". So it is very useful to have a technique that works well without needing a specific distribution. Why use the hazard instead of the distribution function? Consider the following statement: "People in group A are twice as likely to die at age 80 as people in group B". Now that could be true because people in group B tend to live longer than those in group A, or it could be because people in group B tend to live shorter lives and most of them are dead long before age 80, giving a very small probability of them dying at 80 while enough people in group A live to 80 that a fair number of them will die at that age giving a much higher probability of death at that age. So the same statement could mean being in group A is better or worse than being in group B. What makes more sense is to say, of those people (in each group) that lived to 80, what proportion will die before they turn 81. That is the hazard (and the hazard is a function of the distribution function/survival function/etc.). The hazard is easier to work with in the semi-parametric model and can then give you information about the distribution. | {
"source": [
"https://stats.stackexchange.com/questions/64739",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/28237/"
]
} |
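For concreteness, here is an added sketch using the survival package's built-in lung data (not anything from the question): the semi-parametric and a parametric fit differ only in whether a baseline distribution is assumed:

library(survival)

# Semi-parametric: no assumption about the shape of the baseline hazard
cox.fit <- coxph(Surv(time, status) ~ age + sex, data = lung)

# Parametric alternative: assumes Weibull-distributed survival times
weib.fit <- survreg(Surv(time, status) ~ age + sex, data = lung,
                    dist = "weibull")

summary(cox.fit)
summary(weib.fit)

If the Weibull assumption is roughly correct, the parametric fit is the more efficient of the two; if it is wrong, the Cox model still gives interpretable hazard ratios, which is the trade-off the answer describes.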
64,788 | I performed multivariate logistic regression with the dependent variable Y being death at a nursing home within a certain period of entry and got the following results (note if the variables starts in A it is a continuous value while those starting in B are categorical): Call:
glm(Y ~ A1 + B2 + B3 + B4 + B5 + A6 + A7 + A8 + A9, data=mydata, family=binomial)
Deviance Residuals:
Min 1Q Median 3Q Max
-1.0728 -0.2167 -0.1588 -0.1193 3.7788
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 20.048631 6.036637 3.321 0.000896 ***
A1 0.051167 0.016942 3.020 0.002527 **
B2 -0.664940 0.304299 -2.185 0.028878 *
B3 -2.825281 0.633072 -4.463 8.09e-06 ***
B4 -2.547931 0.957784 -2.660 0.007809 **
B5 -2.862460 1.385118 -2.067 0.038774 *
A6 -0.129808 0.041286 -3.144 0.001666 **
A7 0.020016 0.009456 2.117 0.034276 *
A8 -0.707924 0.253396 -2.794 0.005210 **
A9 0.003453 0.001549 2.229 0.025837 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 485.10 on 2206 degrees of freedom
Residual deviance: 417.28 on 2197 degrees of freedom
AIC: 437.28
Number of Fisher Scoring iterations: 7
(Intercept) A1 B2 B3 B4 B5 A6 A7 A8 A9
5.093426e+08 1.052499e+00 5.143045e-01 5.929197e-02 7.824340e-02 5.712806e-02 8.782641e-01 1.020218e+00 4.926657e-01 1.003459e+00
2.5 % 97.5 %
(Intercept) 3.703525e+03 7.004944e+13
A1 1.018123e+00 1.088035e+00
B2 2.832698e-01 9.337710e-01
B3 1.714448e-02 2.050537e-01
B4 1.197238e-02 5.113460e-01
B5 3.782990e-03 8.627079e-01
A6 8.099945e-01 9.522876e-01
A7 1.001484e+00 1.039302e+00
A8 2.998207e-01 8.095488e-01
A9 1.000416e+00 1.006510e+00 As you can see, all of the variables are "significant" in that their p values are below the usual threshold of 0.05. However looking at the coefficients, I'm not quite sure what to make of these results. It seems that although these variables contribute to the model, looking at the odds ratios, they don't seem to really seem to have much predictive power. Of note, when I calculated the AUC, I got approximately 0.8. Can I say that this model is better at predicting against mortality (e.g. predicting that seniors will live past the prescribed period) compared to predicting for mortality? | I would suggest that you use Frank Harrell's excellent rms package . It contains many useful functions to validate and calibrate your model. As far as I know, you cannot assess predictive performance solely based on the coefficients. Further, I would suggest that you use the bootstrap to validate the model. The AUC or concordance-index (c-index) is a useful measure of predictive performance. A c-index of $0.8$ is quite high but as in many predictive models, the fit of your model is likely overoptimistic (overfitting). This overoptimism can be assessed using bootstrap. But let me give an example: #-----------------------------------------------------------------------------
# Load packages
#-----------------------------------------------------------------------------
library(rms)
#-----------------------------------------------------------------------------
# Load data
#-----------------------------------------------------------------------------
mydata <- read.csv("http://www.ats.ucla.edu/stat/data/binary.csv")
mydata$rank <- factor(mydata$rank)
#-----------------------------------------------------------------------------
# Fit logistic regression model
#-----------------------------------------------------------------------------
mylogit <- lrm(admit ~ gre + gpa + rank, x=TRUE, y=TRUE, data = mydata)
mylogit
Model Likelihood Discrimination Rank Discrim.
Ratio Test Indexes Indexes
Obs 400 LR chi2 41.46 R2 0.138 C 0.693
0 273 d.f. 5 g 0.838 Dxy 0.386
1 127 Pr(> chi2) <0.0001 gr 2.311 gamma 0.387
max |deriv| 2e-06 gp 0.167 tau-a 0.168
Brier 0.195
Coef S.E. Wald Z Pr(>|Z|)
Intercept -3.9900 1.1400 -3.50 0.0005
gre 0.0023 0.0011 2.07 0.0385
gpa 0.8040 0.3318 2.42 0.0154
rank=2 -0.6754 0.3165 -2.13 0.0328
rank=3 -1.3402 0.3453 -3.88 0.0001
rank=4 -1.5515 0.4178 -3.71 0.0002

On the bottom you see the usual regression coefficients with corresponding $p$-values. On the top right, you see several discrimination indices. The C denotes the c-index (AUC); a c-index of $0.5$ denotes random splitting whereas a c-index of $1$ denotes perfect prediction. Dxy is Somers' $D_{xy}$ rank correlation between the predicted probabilities and the observed responses. $D_{xy}$ has a simple relationship with the c-index: $D_{xy}=2(c-0.5)$. A $D_{xy}$ of $0$ occurs when the model's predictions are random and when $D_{xy}=1$, the model is perfectly discriminating. In this case, the c-index is $0.693$, which is slightly better than chance, but a c-index of $>0.8$ is good enough for predicting the outcomes of individuals. As said above, the model is likely overoptimistic. We now use the bootstrap to quantify the optimism:

#-----------------------------------------------------------------------------
# Validate model using bootstrap
#-----------------------------------------------------------------------------
my.valid <- validate(mylogit, method="boot", B=1000)
my.valid
index.orig training test optimism index.corrected n
Dxy 0.3857 0.4033 0.3674 0.0358 0.3498 1000
R2 0.1380 0.1554 0.1264 0.0290 0.1090 1000
Intercept 0.0000 0.0000 -0.0629 0.0629 -0.0629 1000
Slope 1.0000 1.0000 0.9034 0.0966 0.9034 1000
Emax 0.0000 0.0000 0.0334 0.0334 0.0334 1000
D 0.1011 0.1154 0.0920 0.0234 0.0778 1000
U -0.0050 -0.0050 0.0015 -0.0065 0.0015 1000
Q 0.1061 0.1204 0.0905 0.0299 0.0762 1000
B 0.1947 0.1915 0.1977 -0.0062 0.2009 1000
g 0.8378 0.9011 0.7963 0.1048 0.7331 1000
gp 0.1673 0.1757 0.1596 0.0161 0.1511 1000 Let's concentrate on the $D_{xy}$ which is at the top. The first column denotes the original index, which was $0.3857$ . The column called optimism denotes the amount of estimated overestimation by the model. The column index.corrected is the original estimate minus the optimism. In this case, the bias-corrected $D_{xy}$ is a bit smaller than the original. The bias-corrected c-index (AUC) is $c=\frac{1+ D_{xy}}{2}=0.6749$ . We can also calculate a calibration curve using resampling: #-----------------------------------------------------------------------------
# Calibration curve using bootstrap
#-----------------------------------------------------------------------------
my.calib <- calibrate(mylogit, method="boot", B=1000)
par(bg="white", las=1)
plot(my.calib, las=1)
n=400 Mean absolute error=0.016 Mean squared error=0.00034
0.9 Quantile of absolute error=0.025 The plot provides some evidence that our model is overfitting: the model underestimates low probabilities and overestimates high probabilities. There is also a systematic overestimation around $0.3$ . Predictive model building is a big topic and I suggest reading Frank Harrell's course notes . | {
"source": [
"https://stats.stackexchange.com/questions/64788",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9244/"
]
} |
64,825 | Should feature selection be performed only on training data (or on all data)? I went through some discussions and papers such as Guyon (2003) and Singhi and Liu (2006) , but I'm still not sure about the right answer. My experiment setup is as follows: Dataset: 50 healthy controls & 50 disease patients (ca. 200 features that can be relevant to disease prediction). The task is to diagnose disease based on the available features. What I do is: 1) Take the whole dataset and perform feature selection (FS); I keep only the selected features for further processing. 2) Split into test and train, train a classifier using the training data and the selected features. Then, apply the classifier to the test data (again using only the selected features). Leave-one-out validation is used. 3) Obtain classification accuracy. 4) Averaging: repeat 1)-3) N times. $N=50$ (100). I would agree that doing FS on the whole dataset can introduce some bias, but my opinion is that it is "averaged out" during averaging (step 4). Is that correct? (Accuracy variance is $<2\%$) 1 Guyon, I. (2003) "An Introduction to Variable and Feature Selection", The Journal of Machine Learning Research, Vol. 3, pp. 1157-1182 2 Singhi, S.K. and Liu, H. (2006) "Feature Subset Selection Bias for Classification Learning", Proceedings of ICML '06, the 23rd International Conference on Machine Learning, pp. 849-856 | The procedure you are using will result in optimistically biased performance estimates, because you use the data from the test set used in steps 2 and 3 to decide which features are used in step 1. Repeating the exercise reduces the variance of the performance estimate, not the bias, so the bias will not average out. To get an unbiased performance estimate, the test data must not be used in any way to make choices about the model, including feature selection. A better approach is to use nested cross-validation, so that the outer cross-validation provides an estimate of the performance obtainable using a method of constructing the model (including feature selection) and the inner cross-validation is used to select the features independently in each fold of the outer cross-validation. Then build your final predictive model using all the data. As you have more features than cases, you are very likely to over-fit the data simply by feature selection. It is a bit of a myth that feature selection should be expected to improve predictive performance, so if that is what you are interested in (rather than identifying the relevant features as an end in itself) then you are probably better off using ridge regression and not performing any feature selection. This will probably give better predictive performance than feature selection, provided the ridge parameter is selected carefully (I use minimisation of Allen's PRESS statistic - i.e. the leave-one-out estimate of the mean-squared error). For further details, see Ambroise and McLachlan , and my answer to this question . | {
"source": [
"https://stats.stackexchange.com/questions/64825",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/28211/"
]
} |
64,991 | There are numerous threads in CrossValidated on the topic of model selection and cross validation. Here are a few: Internal vs external cross-validation and model selection @DikranMarsupial's top answer to Feature selection and cross-validation However, the answers to those threads are fairly generic and mostly highlight the issues with particular approaches to cross validation and model selection. To make things as concrete as possible , say for example that we are working with an SVM with an RBF kernel:
$K(x, x') = \exp(-\gamma \, \Vert x - x'\Vert^2)$, and that I have a dataset of features X and labels y , and that I want to: 1) find the best possible values of my model's parameters ($\gamma$ and $C$), 2) train the SVM with my dataset (for final deployment), and 3) estimate the generalization error and the uncertainty (variance) around this error. To do so, I would personally do a grid search, e.g. I try every possible combination of $C$ and $\gamma$. For simplicity, we can assume the following ranges: $C \in \{10, 100, 1000\}$ and $\gamma \in \{0.1, 0.2, 0.5, 1.0\}$ More specifically, using my full dataset I do the following: For every ($C$,$\gamma$) pair, I do repeated iterations (e.g. 100 random repetitions) of $K$-fold cross validation (e.g. $K=10$) on my dataset, i.e. I train my SVM on $K-1$ folds and evaluate the error on the fold left out, iterating through all $K$ folds. Overall, I collect 100 x 10 = 1000 test errors. For each such ($C$,$\gamma$) pair, I compute the mean and the variance of those 1000 test errors $\mu_M, \sigma_M$. Now I want to choose the best model (the best kernel parameters) that I would use to train my final SVM on the full dataset. My understanding is that choosing the model that had the lowest error mean and variance $\mu_M$ and $\sigma_M$ would be the right choice, and that this model's $\mu_M$ and $\sigma_M$ are my best estimates of the model's generalization error bias and variance when training with the full dataset. BUT, after reading the answers in the threads above, I am getting the impression that this method for choosing the best SVM for deployment and/or for estimating its error (generalization performance) is flawed, and that there are better ways of choosing the best SVM and reporting its error. If so, what are they? I am looking for a concrete answer please. Sticking to this problem, how specifically can I choose the best model and properly estimate its generalization error? | My paper in JMLR addresses this exact question, and demonstrates why the procedure suggested in the question (or at least one very like it) results in optimistically biased performance estimates: Gavin C. Cawley, Nicola L. C. Talbot, "On Over-fitting in Model Selection and Subsequent Selection Bias in Performance Evaluation", Journal of Machine Learning Research, 11(Jul):2079-2107, 2010. ( www ) The key thing to remember is that cross-validation is a technique for estimating the generalisation performance for a method of generating a model, rather than of the model itself. So if choosing kernel parameters is part of the process of generating the model, you need to cross-validate the model selection process as well, otherwise you will end up with an optimistically biased performance estimate (as will happen with the procedure you propose). Assume you have a function fit_model, which takes in a dataset consisting of attributes X and desired responses Y, and which returns the fitted model for that dataset, including the tuning of hyper-parameters (in this case kernel and regularisation parameters). This tuning of hyper-parameters can be performed in many ways, for example minimising the cross-validation error over X and Y. Step 1 - Fit the model to all available data, using the function fit_model. This gives you the model that you will use in operation or deployment. Step 2 - Performance evaluation. Perform repeated cross-validation using all available data. In each fold, the data are partitioned into a training set and a test set.
Fit the model using the training set (record hyper-parameter values for the fitted model) and evaluate performance on the test set. Use the mean over all of the test sets as a performance estimate (and perhaps look at the spread of values as well). Step 3 - Variability of hyper-parameter settings - perform analysis of hyper-parameter values collected in step 2. However, I should point out that there is nothing special about hyper-parameters: they are just parameters of the model that have been estimated (indirectly) from the data. They are treated as hyper-parameters rather than parameters for computational/mathematical convenience, but this doesn't have to be the case. The problem with using cross-validation here is that the training and test data are not independent samples (as they share data) which means that the estimate of the variance of the performance estimate and of the hyper-parameters is likely to be biased (i.e. smaller than it would be for genuinely independent samples of data in each fold). Rather than repeated cross-validation, I would probably use bootstrapping instead and bag the resulting models if this was computationally feasible. The key point is that to get an unbiased performance estimate, whatever procedure you use to generate the final model (fit_model) must be repeated in its entirety independently in each fold of the cross-validation procedure. | {
"source": [
"https://stats.stackexchange.com/questions/64991",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2798/"
]
} |
65,128 | How can one use nested cross validation for model selection? From what I read online, nested CV works as follows: There is the inner CV loop, where we may conduct a grid search (e.g. running K-fold for every available model, e.g. combination of hyperparameters/features) There is the outer CV loop, where we measure the performance of the model that won in the inner fold, on a separate external fold. At the end of this process we end up with $K$ models ($K$ being the number of folds in the outer loop). These models are the ones that won in the grid search within the inner CV, and they are likely different (e.g. SVMs with different kernels, trained with possibly different features, depending on the grid search). How do I choose a model from this output? It looks to me that selecting the best model out of those $K$ winning models would not be a fair comparison since each model was trained and tested on different parts of the dataset. So how can I use nested CV for model selection? Also I have read threads discussing how nested model selection is useful for analyzing the learning procedure. What types of analysis/checks can I do with the scores that I get from the outer K folds? | How do I choose a model from this [outer cross validation] output? Short answer: You don't. Treat the inner cross validation as part of the model fitting procedure. That means that the fitting including the fitting of the hyper-parameters (this is where the inner cross validation hides) is just like any other model estimation routine. The outer cross validation estimates the performance of this model fitting approach. For that you use the usual assumptions: 1. the $k$ outer surrogate models are equivalent to the "real" model built by model.fitting.procedure with all data. 2. Or, in case 1. breaks down (pessimistic bias of resampling validation), at least the $k$ outer surrogate models are equivalent to each other. This allows you to pool (average) the test results. It also means that you do not need to choose among them as you assume that they are basically the same.
The breaking down of this second, weaker assumption is model instability. Do not pick the seemingly best of the $k$ surrogate models - that would usually be just "harvesting" testing uncertainty and leads to an optimistic bias. So how can I use nested CV for model selection? The inner CV does the selection. It looks to me that selecting the best model out of those K winning models would not be a fair comparison since each model was trained and tested on different parts of the dataset. You are right in that it is not a good idea to pick one of the $k$ surrogate models. But you are wrong about the reason. Real reason: see above. The fact that they are not trained and tested on the same data does not "hurt" here. Not having the same testing data: as you want to claim afterwards that the test results generalize to never seen data, this cannot make a difference. Not having the same training data: if the models are stable, this doesn't make a difference: Stable here means that the model does not change (much) if the training data is "perturbed" by replacing a few cases by other cases. If the models are not stable, three considerations are important: (1) you can actually measure whether and to which extent this is the case, by using iterated/repeated $k$-fold cross validation. That allows you to compare cross validation results for the same case that were predicted by different models built on slightly differing training data. (2) If the models are not stable, the variance observed over the test results of the $k$-fold cross validation increases: you do not only have the variance due to the fact that only a finite number of cases is tested in total, but have additional variance due to the instability of the models (variance in the predictive abilities). (3) If instability is a real problem, you cannot extrapolate well to the performance for the "real" model. Which brings me to your last question: What types of analysis/checks can I do with the scores that I get from the outer K folds? Check for stability of the predictions (use iterated/repeated cross-validation). Check for the stability/variation of the optimized hyper-parameters. For one thing, wildly scattering hyper-parameters may indicate that the inner optimization didn't work. For another thing, this may allow you to decide on the hyperparameters without the costly optimization step in similar situations in the future. With costly I do not refer to computational resources but to the fact that this "costs" information that may better be used for estimating the "normal" model parameters. Check for the difference between the inner and outer estimate of the chosen model. If there is a large difference (the inner being very overoptimistic), there is a risk that the inner optimization didn't work well because of overfitting. Update @user99889's question: What to do if the outer CV finds instability? First of all, detecting in the outer CV loop that the models do not yield stable predictions in that respect doesn't really differ from detecting that the prediction error is too high for the application. It is one of the possible outcomes of model validation (or verification) implying that the model we have is not fit for its purpose. In the comment answering @davips, I was thinking of tackling the instability in the inner CV - i.e. as part of the model optimization process. But you are certainly right: if we change our model based on the findings of the outer CV, yet another round of independent testing of the changed model is necessary.
However, instability in the outer CV would also be a sign that the optimization wasn't set up well - so finding instability in the outer CV implies that the inner CV did not penalize instability in the necessary fashion - this would be my main point of critique in such a situation. In other words, why does the optimization allow/lead to heavily overfit models? However, there is one peculiarity here that IMHO may excuse the further change of the "final" model after careful consideration of the exact circumstances: As we did detect overfitting, any proposed change (fewer d.f./more restrictive or aggregation) to the model would be in the direction of less overfitting (or at least hyperparameters that are less prone to overfitting). The point of independent testing is to detect overfitting - underfitting can be detected by data that was already used in the training process. So if we are talking, say, about further reducing the number of latent variables in a PLS model, that would be comparably benign (if the proposed change would be a totally different type of model, say PLS instead of SVM, all bets would be off), and I'd be even more relaxed about it if I knew that we are anyway in an intermediate stage of modeling - after all, if the optimized models are still unstable, there's no question that more cases are needed. Also, in many situations, you'll eventually need to perform studies that are designed to properly test various aspects of performance (e.g. generalization to data acquired in the future).
Still, I'd insist that the full modeling process would need to be reported, and that the implications of these late changes would need to be carefully discussed. Also, aggregation including an out-of-bag analogue CV estimate of performance would be possible from the already available results - which is the other type of "post-processing" of the model that I'd be willing to consider benign here. Yet again, it then would have been better if the study were designed from the beginning to check that aggregation provides no advantage over individual predictions (which is another way of saying that the individual models are stable). Update (2019): the more I think about these situations, the more I come to favor the "nested cross validation apparently without nesting" approach . | {
"source": [
"https://stats.stackexchange.com/questions/65128",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2798/"
]
} |
65,692 | Canonical correlation analysis (CCA) is a technique related to principal component analysis (PCA). While it is easy to teach PCA or linear regression using a scatter plot (see a few thousand examples on google image search), I have not seen a similar intuitive two-dimensional example for CCA. How to explain visually what linear CCA does? | Well, I think it is really difficult to present a visual explanation of Canonical correlation analysis (CCA) vis-a-vis Principal components analysis (PCA) or Linear regression . The latter two are often explained and compared by means of a 2D or 3D data scatterplots, but I doubt if that is possible with CCA. Below I've drawn pictures which might explain the essence and the differences in the three procedures, but even with these pictures - which are vector representations in the "subject space" - there are problems with capturing CCA adequately. (For algebra/algorithm of canonical correlation analysis look in here .) Drawing individuals as points in a space where the axes are variables, a usual scatterplot, is a variable space . If you draw the opposite way - variables as points and individuals as axes - that will be a subject space . Drawing the many axes is actually needless because the space has the number of non-redundant dimensions equal to the number of non-collinear variables. Variable points are connected with the origin and form vectors, arrows, spanning the subject space; so here we are ( see also ). In a subject space, if variables have been centered, the cosine of the angle between their vectors is Pearson correlation between them, and the vectors' lengths squared are their variances . On the pictures below the variables displayed are centered (no need for a constant arises). Principal Components Variables $X_1$ and $X_2$ positively correlate: they have acute angle between them. Principal components $P_1$ and $P_2$ lie in the same space "plane X" spanned by the two variables. The components are variables too, only mutually orthogonal (uncorrelated). The direction of $P_1$ is such as to maximize the sum of the two squared loadings of this component; and $P_2$ , the remaining component, goes orthogonally to $P_1$ in plane X. The squared lengths of all the four vectors are their variances (the variance of a component is the aforementioned sum of its squared loadings). Component loadings are the coordinates of variables onto the components - $a$ 's shown on the left pic. Each variable is the error-free linear combination of the two components, with the corresponding loadings being the regression coefficients. And vice versa , each component is the error-free linear combination of the two variables; the regression coefficients in this combination are given by the skew coordinates of the components onto the variables - $b$ 's shown on the right pic. The actual regression coefficient magnitude will be $b$ divided by the product of lengths (standard deviations) of the predicted component and the predictor variable, e.g. $b_{12}/(|P_1|*|X_2|)$ . [Footnote: The components' values appearing in the mentioned above two linear combinations are standardized values, st. dev. = 1. This because the information about their variances is captured by the loadings . To speak in terms of unstandardized component values, $a$ 's on the pic above should be eigenvectors ' values, the rest of the reasoning being the same.] 
Multiple Regression Whereas in PCA everything lies in plane X, in multiple regression there appears a dependent variable $Y$ which usually doesn't belong to plane X, the space of the predictors $X_1$ , $X_2$ . But $Y$ is perpendicularly projected onto plane X, and the projection $Y'$ , the $Y$ 's shade, is the prediction by or linear combination of the two $X$ 's. On the picture, the squared length of $e$ is the error variance. The cosine between $Y$ and $Y'$ is the multiple correlation coefficient. Like it was with PCA, the regression coefficients are given by the skew coordinates of the prediction ( $Y'$ ) onto the variables - $b$ 's. The actual regression coefficient magnitude will be $b$ divided by the length (standard deviation) of the predictor variable, e.g. $b_{2}/|X_2|$ . Canonical Correlation In PCA, a set of variables predict themselves: they model principal components which in turn model back the variables, you don't leave the space of the predictors and (if you use all the components) the prediction is error-free. In multiple regression, a set of variables predict one extraneous variable and so there is some prediction error. In CCA, the situation is similar to that in regression, but (1) the extraneous variables are multiple, forming a set of their own; (2) the two sets predict each other simultaneously (hence correlation rather than regression); (3) what they predict in each other is rather an extract, a latent variable, than the observed predictand of a regression ( see also ). Let's involve the second set of variables $Y_1$ and $Y_2$ to correlate canonically with our $X$ 's set. We have spaces - here, planes - X and Y. It should be notified that in order the situation to be nontrivial - like that was above with regression where $Y$ stands out of plane X - planes X and Y must intersect only in one point, the origin. Unfortunately it is impossible to draw on paper because 4D presentation is necessary. Anyway, the grey arrow indicates that the two origins are one point and the only one shared by the two planes. If that is taken, the rest of the picture resembles what was with regression. $V_x$ and $V_y$ are the pair of canonical variates. Each canonical variate is the linear combination of the respective variables, like $Y'$ was. $Y'$ was the orthogonal projection of $Y$ onto plane X. Here $V_x$ is a projection of $V_y$ on plane X and simultaneously $V_y$ is a projection of $V_x$ on plane Y, but they are not orthogonal projections. Instead, they are found (extracted) so as to minimize the angle $\phi$ between them . Cosine of that angle is the canonical correlation. Since projections need not be orthogonal, lengths (hence variances) of the canonical variates are not automatically determined by the fitting algorithm and are subject to conventions/constraints which may differ in different implementations. The number of pairs of canonical variates (and hence the number of canonical correlations) is min(number of $X$ s, number of $Y$ s). And here comes the time when CCA resembles PCA. In PCA, you skim mutually orthogonal principal components (as if) recursively until all the multivariate variability is exhausted. Similarly, in CCA mutually orthogonal pairs of maximally correlated variates are extracted until all the multivariate variability that can be predicted in the lesser space (lesser set) is up. In our example with $X_1$ $X_2$ vs $Y_1$ $Y_2$ there remains the second and weaker correlated canonical pair $V_{x(2)}$ (orthogonal to $V_x$ ) and $V_{y(2)}$ (orthogonal to $V_y$ ). 
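To connect this geometric picture with an actual computation, here is a small R sketch (an editorial illustration, not part of the original answer; the simulated data and coefficients are arbitrary) showing that the canonical variates returned by the base function cancor() behave exactly as described above: the first pair attains the maximal correlation, i.e. the cosine of the angle $\phi$, and variates within a set are mutually uncorrelated.
set.seed(42)
n <- 200
X <- matrix(rnorm(n * 2), n, 2)                                               # first set: X1, X2
Y <- X %*% matrix(c(0.6, 0.2, 0.1, 0.7), 2, 2) + matrix(rnorm(n * 2), n, 2)   # second set, correlated with X
cc <- cancor(X, Y)                                                            # canonical correlation analysis
cc$cor                                           # canonical correlations = cos(phi) for each canonical pair
Vx <- scale(X, center = cc$xcenter, scale = FALSE) %*% cc$xcoef               # canonical variates of X
Vy <- scale(Y, center = cc$ycenter, scale = FALSE) %*% cc$ycoef               # canonical variates of Y
cor(Vx[, 1], Vy[, 1])                            # reproduces the first canonical correlation
cor(Vx[, 1], Vx[, 2])                            # variates within a set are uncorrelated (close to 0)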
For the difference between CCA and PCA+regression see also Doing CCA vs. building a dependent variable with PCA and then doing regression . What is the benefit of canonical correlation over individual Pearson correlations of pairs of variables from the two sets? (my answer's in comments). | {
"source": [
"https://stats.stackexchange.com/questions/65692",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/28517/"
]
} |
65,705 | I need to calculate the sample Mahalanobis distance in R between every pair of observations in an $n \times p$ matrix of covariates. I need a solution that is efficient, i.e. only $n(n-1)/2$ distances are calculated, and preferably implemented in C/RCpp/Fortran etc. I assume that $\Sigma$, the population covariance matrix, is unknown and use the sample covariance matrix in its place. I am particularly interested in this question since there seems to be no "consensus" method for calculating pairwise Mahalanobis distances in R, i.e. it is not implemented in the dist function nor in the cluster::daisy function. The mahalanobis function does not calculate pairwise distances without additional work from the programmer. This was already asked here Pairwise Mahalanobis distance in R , but the solutions there seem incorrect. Here is a correct but terribly inefficient (since $n \times n$ distances are calculated) method: set.seed(0)
x0 <- MASS::mvrnorm(33,1:10,diag(c(seq(1,1/2,l=10)),10))
dM = as.dist(apply(x0, 1, function(i) mahalanobis(x0, i, cov = cov(x0)))) This is easy enough to code myself in C, but I feel like something this basic should have a preexisting solution. Is there one? There are other solutions that fall short: HDMD::pairwise.mahalanobis() calculates $n \times n$ distances, when only $n(n-1)/2$ unique distances are required. compositions::MahalanobisDist() seems promising, but I don't want my function to come from a package that depends on rgl , which severely limits others' ability to run my code. Unless this implementation is perfect, I'd rather write my own. Anybody have experience with this function? | Starting from ahfoss's "succinct" solution, I have used the Cholesky decomposition in place of the SVD. cholMaha <- function(X) {
dec <- chol( cov(X) )
tmp <- forwardsolve(t(dec), t(X) )
dist(t(tmp))
} It should be faster, because forward-solving a triangular system is faster than dense matrix multiplication with the inverse covariance ( see here ). Here are the benchmarks with ahfoss's and whuber's solutions in several settings: require(microbenchmark)
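# Editorial note: fastPwMahal() and mahal() below are *assumed* stand-ins for the
# solutions from the other answers in the thread (ahfoss's and whuber's), which are
# not reproduced in this excerpt; they are included only so this chunk is
# self-contained. The timings reported below were obtained with the original code.
fastPwMahal <- function(x1, invCovMat) {
  # pairwise Mahalanobis distances via d^2(i,j) = q_i + q_j - 2 * x_i' S^{-1} x_j,
  # where q_i = x_i' S^{-1} x_i
  xs <- x1 %*% invCovMat
  q  <- rowSums(xs * x1)
  d2 <- outer(q, q, "+") - 2 * xs %*% t(x1)
  as.dist(sqrt(pmax(d2, 0)))   # pmax() guards against tiny negative round-off
}
mahal <- function(x) {
  # naive reference implementation based on stats::mahalanobis()
  S <- cov(x)
  as.dist(sqrt(apply(x, 1, function(r) mahalanobis(x, r, cov = S))))
}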
set.seed(26565)
N <- 100
d <- 10
X <- matrix(rnorm(N*d), N, d)
A <- cholMaha( X = X )
A1 <- fastPwMahal(x1 = X, invCovMat = solve(cov(X)))
sum(abs(A - A1))
# [1] 5.973666e-12 Reassuring!
microbenchmark(cholMaha(X),
fastPwMahal(x1 = X, invCovMat = solve(cov(X))),
mahal(x = X))
Unit: microseconds
expr min lq median uq max neval
cholMaha 502.368 508.3750 512.3210 516.8960 542.806 100
fastPwMahal 634.439 640.7235 645.8575 651.3745 1469.112 100
mahal 839.772 850.4580 857.4405 871.0260 1856.032 100
N <- 10
d <- 5
X <- matrix(rnorm(N*d), N, d)
microbenchmark(cholMaha(X),
fastPwMahal(x1 = X, invCovMat = solve(cov(X))),
mahal(x = X)
)
Unit: microseconds
expr min lq median uq max neval
cholMaha 112.235 116.9845 119.114 122.3970 169.924 100
fastPwMahal 195.415 201.5620 205.124 208.3365 1273.486 100
mahal 163.149 169.3650 172.927 175.9650 311.422 100
N <- 500
d <- 15
X <- matrix(rnorm(N*d), N, d)
microbenchmark(cholMaha(X),
fastPwMahal(x1 = X, invCovMat = solve(cov(X))),
mahal(x = X)
)
Unit: milliseconds
expr min lq median uq max neval
cholMaha 14.58551 14.62484 14.74804 14.92414 41.70873 100
fastPwMahal 14.79692 14.91129 14.96545 15.19139 15.84825 100
mahal 12.65825 14.11171 39.43599 40.26598 41.77186 100
N <- 500
d <- 5
X <- matrix(rnorm(N*d), N, d)
microbenchmark(cholMaha(X),
fastPwMahal(x1 = X, invCovMat = solve(cov(X))),
mahal(x = X)
)
Unit: milliseconds
expr min lq median uq max neval
cholMaha 5.007198 5.030110 5.115941 5.257862 6.031427 100
fastPwMahal 5.082696 5.143914 5.245919 5.457050 6.232565 100
mahal 10.312487 12.215657 37.094138 37.986501 40.153222 100 So Cholesky seems to be uniformly faster. | {
"source": [
"https://stats.stackexchange.com/questions/65705",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/28520/"
]
} |
66,088 | Say for example you are doing a linear model, but the data $y$ is complex. $ y = x \beta + \epsilon $ My data set is complex, as in all the numbers in $y$ are of the form $(a + bi)$. Is there anything procedurally different when working with such data? I ask because, you will end up getting complex covariance matrices, and test statistics which are complex valued.. Do you need to use a conjugate transposes instead of transposes when doing least squares? is a complex valued covariance meaningful? | Summary The generalization of least-squares regression to complex-valued variables is straightforward, consisting primarily of replacing matrix transposes by conjugate transposes in the usual matrix formulas. A complex-valued regression, though, corresponds to a complicated multivariate multiple regression whose solution would be much more difficult to obtain using standard (real variable) methods. Thus, when the complex-valued model is meaningful, using complex arithmetic to obtain a solution is strongly recommended. This answer also includes some suggested ways to display the data and present diagnostic plots of the fit. For simplicity, let's discuss the case of ordinary (univariate) regression, which can be written $$z_j = \beta_0 + \beta_1 w_j + \varepsilon_j.$$ I have taken the liberty of naming the independent variable $W$ and the dependent variable $Z$, which is conventional (see, for instance, Lars Ahlfors, Complex Analysis ). All that follows is straightforward to extend to the multiple regression setting. Interpretation This model has an easily visualized geometric interpretation: multiplication by $\beta_1$ will rescale $w_j$ by the modulus of $\beta_1$ and rotate it around the origin by the argument of $\beta_1$. Subsequently, adding $\beta_0$ translates the result by this amount. The effect of $\varepsilon_j$ is to "jitter" that translation a little bit. Thus, regressing the $z_j$ on the $w_j$ in this manner is an effort to understand the collection of 2D points $(z_j)$ as arising from a constellation of 2D points $(w_j)$ via such a transformation, allowing for some error in the process. This is illustrated below with the figure titled "Fit as a Transformation." Note that the rescaling and rotation are not just any linear transformation of the plane: they rule out skew transformations, for instance. Thus this model is not the same as a bivariate multiple regression with four parameters. Ordinary Least Squares To connect the complex case with the real case, let's write $z_j = x_j + i y_j$ for the values of the dependent variable and $w_j = u_j + i v_j$ for the values of the independent variable. Furthermore, for the parameters write $\beta_0 = \gamma_0 + i \delta_0$ and $\beta_1 = \gamma_1 +i \delta_1$. Every one of the new terms introduced is, of course, real, and $i^2 = -1$ is imaginary while $j=1, 2, \ldots, n$ indexes the data. OLS finds $\hat\beta_0$ and $\hat\beta_1$ that minimize the sum of squares of deviations, $$\sum_{j=1}^n ||z_j - \left(\hat\beta_0 + \hat\beta_1 w_j\right)||^2
= \sum_{j=1}^n \left(\bar z_j - \left(\bar{\hat\beta_0} + \bar{\hat\beta_1} \bar w_j\right)\right) \left(z_j - \left(\hat\beta_0 + \hat\beta_1 w_j\right)\right).$$ Formally this is identical to the usual matrix formulation: compare it to $\left(z - X\beta\right)'\left(z - X\beta\right).$ The only difference we find is that the transpose of the design matrix $X'$ is replaced by the conjugate transpose $X^* = \bar X '$. Consequently the formal matrix solution is $$\hat\beta = \left(X^*X\right)^{-1}X^* z.$$ At the same time, to see what might be accomplished by casting this into a purely real-variable problem, we may write the OLS objective out in terms of the real components: $$\sum_{j=1}^n \left(x_j-\gamma_0-\gamma_1u_j+\delta_1v_j\right)^2
+ \sum_{j=1}^n\left(y_j-\delta_0-\delta_1u_j-\gamma_1v_j\right)^2.$$ Evidently this represents two linked real regressions: one of them regresses $x$ on $u$ and $v$, the other regresses $y$ on $u$ and $v$; and we require that the $v$ coefficient for $x$ be the negative of the $u$ coefficient for $y$ and the $u$ coefficient for $x$ equal the $v$ coefficient for $y$. Moreover, because the total squares of residuals from the two regressions are to be minimized, it will usually not be the case that either set of coefficients gives the best estimate for $x$ or $y$ alone. This is confirmed in the example below, which carries out the two real regressions separately and compares their solutions to the complex regression. This analysis makes it apparent that rewriting the complex regression in terms of the real parts (1) complicates the formulas, (2) obscures the simple geometric interpretation, and (3) would require a generalized multivariate multiple regression (with nontrivial correlations among the variables) to solve. We can do better. Example As an example, I take a grid of $w$ values at integral points near the origin in the complex plane. To the transformed values $w\beta$ are added iid errors having a bivariate Gaussian distribution: in particular, the real and imaginary parts of the errors are not independent. It is difficult to draw the usual scatterplot of $(w_j, z_j)$ for complex variables, because it would consist of points in four dimensions. Instead we can view the scatterplot matrix of their real and imaginary parts. Ignore the fit for now and look at the top four rows and four left columns: these display the data. The circular grid of $w$ is evident in the upper left; it has $81$ points. The scatterplots of the components of $w$ against the components of $z$ show clear correlations. Three of them have negative correlations; only the $y$ (the imaginary part of $z$) and $u$ (the real part of $w$) are positively correlated. For these data, the true value of $\beta$ is $(-20 + 5i, -3/4 + 3/4\sqrt{3}i)$. It represents an expansion by $3/2$ and a counterclockwise rotation of 120 degrees followed by translation of $20$ units to the left and $5$ units up. I compute three fits: the complex least squares solution and two OLS solutions for $(x_j)$ and $(y_j)$ separately, for comparison. Fit Intercept Slope(s)
True -20 + 5 i -0.75 + 1.30 i
Complex -20.02 + 5.01 i -0.83 + 1.38 i
Real only -20.02 -0.75, -1.46
Imaginary only 5.01 1.30, -0.92 It will always be the case that the real-only intercept agrees with the real part of the complex intercept and the imaginary-only intercept agrees with the imaginary part of the complex intercept. It is apparent, though, that the real-only and imaginary-only slopes neither agree with the complex slope coefficients nor with each other, exactly as predicted. Let's take a closer look at the results of the complex fit. First, a plot of the residuals gives us an indication of their bivariate Gaussian distribution. (The underlying distribution has marginal standard deviations of $2$ and a correlation of $0.8$.) Then, we can plot the magnitudes of the residuals (represented by sizes of the circular symbols) and their arguments (represented by colors exactly as in the first plot) against the fitted values: this plot should look like a random distribution of sizes and colors, which it does. Finally, we can depict the fit in several ways. The fit appeared in the last rows and columns of the scatterplot matrix ( q.v. ) and may be worth a closer look at this point. Below on the left the fits are plotted as open blue circles and arrows (representing the residuals) connect them to the data, shown as solid red circles. On the right the $(w_j)$ are shown as open black circles filled in with colors corresponding to their arguments; these are connected by arrows to the corresponding values of $(z_j)$. Recall that each arrow represents an expansion by $3/2$ around the origin, rotation by $120$ degrees, and translation by $(-20, 5)$, plus that bivariate Gaussian error. These results, the plots, and the diagnostic plots all suggest that the complex regression formula works correctly and achieves something different than separate linear regressions of the real and imaginary parts of the variables. Code The R code to create the data, fits, and plots appears below. Note that the actual solution of $\hat\beta$ is obtained in a single line of code. Additional work--but not too much of it--would be needed to obtain the usual least squares output: the variance-covariance matrix of the fit, standard errors, p-values, etc. #
# Synthesize data.
# (1) the independent variable `w`.
#
w.max <- 5 # Max extent of the independent values
w <- expand.grid(seq(-w.max,w.max), seq(-w.max,w.max))
w <- complex(real=w[[1]], imaginary=w[[2]])
w <- w[Mod(w) <= w.max]
n <- length(w)
#
# (2) the dependent variable `z`.
#
beta <- c(-20+5i, complex(argument=2*pi/3, modulus=3/2))
sigma <- 2; rho <- 0.8 # Parameters of the error distribution
library(MASS) #mvrnorm
set.seed(17)
e <- mvrnorm(n, c(0,0), matrix(c(1,rho,rho,1)*sigma^2, 2))
e <- complex(real=e[,1], imaginary=e[,2])
z <- as.vector((X <- cbind(rep(1,n), w)) %*% beta + e)
#
# Fit the models.
#
print(beta, digits=3)
print(beta.hat <- solve(Conj(t(X)) %*% X, Conj(t(X)) %*% z), digits=3)
print(beta.r <- coef(lm(Re(z) ~ Re(w) + Im(w))), digits=3)
print(beta.i <- coef(lm(Im(z) ~ Re(w) + Im(w))), digits=3)
#
# Show some diagnostics.
#
par(mfrow=c(1,2))
res <- as.vector(z - X %*% beta.hat)
fit <- z - res
s <- sqrt(Re(mean(Conj(res)*res)))
col <- hsv((Arg(res)/pi + 1)/2, .8, .9)
size <- Mod(res) / s
plot(res, pch=16, cex=size, col=col, main="Residuals")
plot(Re(fit), Im(fit), pch=16, cex = size, col=col,
main="Residuals vs. Fitted")
plot(Re(c(z, fit)), Im(c(z, fit)), type="n",
main="Residuals as Fit --> Data", xlab="Real", ylab="Imaginary")
points(Re(fit), Im(fit), col="Blue")
points(Re(z), Im(z), pch=16, col="Red")
arrows(Re(fit), Im(fit), Re(z), Im(z), col="Gray", length=0.1)
col.w <- hsv((Arg(w)/pi + 1)/2, .8, .9)
plot(Re(c(w, z)), Im(c(w, z)), type="n",
main="Fit as a Transformation", xlab="Real", ylab="Imaginary")
points(Re(w), Im(w), pch=16, col=col.w)
points(Re(w), Im(w))
points(Re(z), Im(z), pch=16, col=col.w)
arrows(Re(w), Im(w), Re(z), Im(z), col="#00000030", length=0.1)
#
# Display the data.
#
par(mfrow=c(1,1))
pairs(cbind(w.Re=Re(w), w.Im=Im(w), z.Re=Re(z), z.Im=Im(z),
fit.Re=Re(fit), fit.Im=Im(fit)), cex=1/2) | {
"source": [
"https://stats.stackexchange.com/questions/66088",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/13902/"
]
} |
66,315 | I recently started reading "Introduction to Bayesian Statistics" 2nd Edition by Bolstad. I've had an introductory stats class that covered mainly statistical tests and am almost through a class in regression analysis. What other books can I use to supplement my understanding of this one? I've made it through the first 100-125 pages fine. Afterwards the book begins to talk about hypothesis testing which is what I'm very excited to cover but there are a couple of things throwing me: The use of probability density functions in calculations. In other words how to evaluate such equations. This whole sentence: "Suppose we use a beta(1,1) prior for pi. Then given y=8, the posterior density is beta(9,3). The posterior probability of the null hypothesis is..." I believe beta(1,1) refers to a PDF where the mean is 1 and the stdev is 1? I don't get how it would change to a beta(9,3) as a posterior density function. I do get the concept of priors vs posteriors and understand how to apply them using a table manually. I get (I think!) that pi represents the supposed population proportion or probability. I don't get how to connect this together with data I would run into on a day to day basis and get results. | The use of probability density functions in calculations. In other words how to evaluate such equations. I think you're still thinking of this from a frequentist perspective: if you're looking for a point estimate, the posterior won't give it to you. You put PDFs in, you get PDFs out. You can derive point estimates by calculating statistics from your posterior distribution, but I'll get to that in a bit. I do get the concept of priors vs posteriors and understand how to apply them using a table manually. I get (I think!) that pi represents the supposed population proportion or probability. $\pi(x)$ is the same thing as $p(x)$: they're both PDFs. $\pi$ is just conventionally used to denote that the particular PDF is a prior density. I suspect that you don't get priors and posteriors as well as you think you do, so let's back it up to the fundamental underpinning of Bayesian statistics: Subjective Probability . A Thought Experiment in Subjective Probability Let's say I present you with a coin and ask you whether or not you think this coin is a fair coin. You've heard a lot of people talk about unfair coins in probability class, but you've never actually seen one in real life, so you respond, "Yeah, sure, I think it's a fair coin." But, the fact that I'm even asking you this question puts you off a little, so although your estimation is that it's fair, you wouldn't really be surprised if it wasn't. Much less surprised than if you found this coin in your pocket change (because you assume that's all real currency, and you don't really trust me right now because I'm acting suspicious). Now, we run a few experiments. After 100 flips, the coin gives back 53 Heads. You're a lot more confident that it's a fair coin, but you're still open to the possibility that it's not. The difference is that now you would be pretty surprised if this coin turned out to have some sort of bias. How can we represent your prior and posterior beliefs here, specifically, regarding the probability that the coin will show heads (which we will denote $\theta$)? In a frequentist setting, your prior belief--your null hypothesis--is that $\theta = 0.5$. After running the experiment, you're not able to reject the null, and so you continue on with the assumption that yes, the coin is probably fair.
But how do we encapsulate the change in your confidence that the coin is fair? After the experiment you are in a position that you would bet that the coin is fair, but before the experiment you would have been trepidatious. In the Bayesian setting, you encapsulate your confidence in propositions by not treating probabilities as scalar values but as random variables, i.e. functions. Instead of saying $\theta = 0.5$ we say $\theta \sim N(0.5, \sigma^2)$, and thereby encapsulate our confidence in the variance of the PDF. If we set a high variance, we're saying, "I think that the probability is 0.5, but I wouldn't be surprised if the probability I actually observe in the world is far away from this value. I think $\theta= 0.5$, but frankly I'm not really that sure." By setting a low variance, we're saying, "Not only do I believe the probability is 0.5, but I would be very surprised if experimentation provides a value that's not very close to $\theta=0.5$." So, in this example when you start the experiment you have a prior with high variance. After receiving data that corroborates your prior, the mean of the prior stayed the same, but the variance became much narrower. Our confidence that $\theta=0.5$ is much higher after running the experiment than before. So how do we perform calculations? We start with PDFs, and we end with PDFs. When you need to report a point estimate, you can calculate statistics like the mean, median or mode of your posterior distribution (depending on your loss function, which I won't get into now. Let's just stick with the mean). If you have a closed form solution for your PDF, it will probably be trivial to determine these values. If the posterior is complicated, you can use procedures like MCMC to sample from your posterior and derive statistics from the sample you drew. In the example where you have a Beta prior and a Binomial likelihood, the calculation of the posterior reduces to a very clean calculation. Given: Prior: $\theta \sim Beta(\alpha, \beta)$ Likelihood: $X|\theta \sim Binomial(\theta)$ Then the posterior reduces to: Posterior: $\theta|X \sim Beta(\alpha + \sum_{i=1}^n x_i,\, \beta + n - \sum_{i=1}^n x_i)$ This will happen any time you have a beta prior and a binomial likelihood, and the reason why should be evident in the calculations provided by DJE . When a particular prior-likelihood model always gives a posterior that has the same kind of distribution as the prior, the relationship between the types of distributions used for the prior and likelihood is called Conjugate . There are many pairs of distributions that have conjugate relationships, and conjugacy is very frequently leveraged by Bayesians to simplify calculations. Given a particular likelihood, you can make your life a lot easier by selecting a conjugate prior (if one exists and you can justify your choice of prior). I believe beta(1,1) refers to a PDF where the mean is 1 and the stdev is 1? In the common parameterization of the normal distribution, the two parameters signify the mean and standard deviation of the distribution. But that's just how we parameterize the normal distribution. Other probability distributions are parameterized very differently. The Beta distribution is usually parameterized as $Beta(\alpha, \beta)$ where $\alpha$ and $\beta$ are called "shape" parameters. The Beta distribution is extremely flexible and takes lots of different forms depending on how these parameters are set. 
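As a concrete illustration of how these shape parameters work and of the conjugate update discussed above, here is a short R sketch (an editorial addition, not from the original answer; it assumes the book example means y = 8 successes in n = 10 trials, which is exactly what turns a beta(1,1) prior into a beta(9,3) posterior):
a <- 1; b <- 1                  # beta(1,1) prior: flat over [0, 1]
y <- 8; n <- 10                 # assumed data: 8 successes in 10 trials
a.post <- a + y                 # 1 + 8 = 9
b.post <- b + n - y             # 1 + 2 = 3
curve(dbeta(x, a, b), 0, 1, ylim = c(0, 4), ylab = "density")   # prior
curve(dbeta(x, a.post, b.post), add = TRUE, lty = 2)            # posterior: beta(9, 3)
a.post / (a.post + b.post)      # posterior mean of pi: 0.75
1 - pbeta(0.5, a.post, b.post)  # posterior probability that pi > 0.5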
To illustrate how different this parameterization is from your original assumption, here's how you calculate the mean and variance for Beta random variables: \begin{equation}
\begin{split}
X &\sim Beta(\alpha, \beta) \\
\operatorname{E}[X] &= \frac{\alpha}{\alpha + \beta} \\
\operatorname{var}[X] &= \frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}
\end{split}
\end{equation} As you can clearly see, the mean and variance are not a part of the parameterization of this distribution, but they have closed form solutions that are simple functions of the input parameters. I won't go into detail describing the differences in parameterizations of other well known distributions, but I recommend you look up a few. Any basic text, even Wikipedia , should somewhat describe how changing the parameters modifies the distribution. You should also read up on the relationships between the different distributions (for instance, $Beta(1,1)$ is the same thing as $Uniform(0,1)$). | {
"source": [
"https://stats.stackexchange.com/questions/66315",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9426/"
]
} |
66,369 | I've found two definitions in the literature for the autocorrelation time of a weakly stationary time series: $$
\tau_a = 1+2\sum_{k=1}^\infty \rho_k \quad \text{versus} \quad \tau_b = 1+2\sum_{k=1}^\infty \left|\rho_k\right|
$$ where $\rho_k = \frac{\text{Cov}[X_t,X_{t+k}]}{\text{Var}[X_t]}$ is the autocorrelation at lag $k$. One application of the autocorrelation time is to find the "effective sample size": if you have $n$ observations of a time series, and you know its autocorrelation time $\tau$, then you can pretend that you have $$
n_\text{eff} = \frac{n}{\tau}
$$ independent samples instead of $n$ correlated ones for the purposes of finding the mean. Estimating $\tau$ from data is non-trivial, but there are a few ways of doing it (see Thompson 2010 ). The definition without absolute values, $\tau_a$, seems more common in the literature; but it admits the possibility of $\tau_a<1$. Using R and the "coda" package: require(coda)
ts.uncorr <- arima.sim(model=list(),n=10000) # white noise
ts.corr <- arima.sim(model=list(ar=-0.5),n=10000) # AR(1)
effectiveSize(ts.uncorr) # Sanity check
# result should be close to 10000
effectiveSize(ts.corr)
# result is in the neighborhood of 30000... ??? The "effectiveSize" function in "coda" uses a definition of the autocorrelation time equivalent to $\tau_a$, above. There are some other R packages out there that compute effective sample size or autocorrelation time, and all the ones I've tried give results consistent with this: that an AR(1) process with a negative AR coefficient has more effective samples than the correlated time series. This seems strange. Obviously, this can never happen in the $\tau_b$ definition of autocorrelation time. What is the correct definition of autocorrelation time? Is there something wrong with my understanding of effective sample sizes? The $n_\text{eff} > n$ result shown above seems like it must be wrong... what's going on? | First, the appropriate definition of "effective sample size" is IMO linked to a quite specific question. If $X_1, X_2, \ldots$ are identically distributed with mean $\mu$ and variance 1 the empirical mean
$$\hat{\mu} = \frac{1}{n} \sum_{k=1}^n X_k$$
is an unbiased estimator of $\mu$. But what about its variance? For independent variables the variance is $n^{-1}$. For a weakly stationary time series, the variance of $\hat{\mu}$ is
$$\frac{1}{n^2} \sum_{k, l=1}^n \text{cov}(X_k, X_l) = \frac{1}{n}\left(1 + 2\left(\frac{n-1}{n} \rho_1 + \frac{n-2}{n} \rho_2 + \ldots + \frac{1}{n} \rho_{n-1}\right) \right) \simeq \frac{\tau_a}{n}.$$
The approximation is valid for large enough $n$. If we define $n_{\text{eff}} = n/\tau_a$, the variance of the empirical mean for a weakly stationary time series is approximately $n_{\text{eff}}^{-1}$, which is the same variance as if we had $n_{\text{eff}}$ independent samples. Thus $n_{\text{eff}} = n/\tau_a$ is an appropriate definition if we ask for the variance of the empirical average. It might be inappropriate for other purposes. With a negative correlation between observations it is certainly possible that the variance can become smaller than $n^{-1}$ ($n_{\text{eff}} > n$). This is a well-known variance reduction technique in Monte Carlo integration: If we introduce negative correlation between the variables instead of correlation 0, we can reduce the variance without increasing the sample size. | {
"source": [
"https://stats.stackexchange.com/questions/66369",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/26753/"
]
} |
66,448 | I have several covariates in my calculation for a model, and not all of them are statistically significant. Should I remove those that are not? This question discusses the phenomenon, but does not answer my question: How to interpret non-significant effect of a covariate in ANCOVA? There is nothing in the answer to that question that suggests that non-significant covariates be taken out, though, so right now I am inclined to believe that they should stay in. Before even reading that answer, I was thinking the same since a covariate can still explain some of the variance (and thus help the model) without necessarily explaining an amount beyond some threshold (the significance threshold, which I see as not applicable to covariates). There is another question somewhere on CV for which the answer seems to imply that covariates should be kept in regardless of significance, but it is not clear on that. (I want to link to that question, but I was not able to track it down again just now.) So... Should covariates that do not show as statistically significant be kept in the calculation for the model? (I have edited this question to clarify that covariates are never in the model output by the calculation anyway.) To add complication, what if the covariates are statistically significant for some subsets of the data (subsets which have to be processed separately). I would default to keeping such a covariate, otherwise either different models would have to be used or you would have a statistically significant covariate missing in one of the cases. If you also have an answer for this split case, though, please mention it. | You have gotten several good answers already. There are reasons to keep covariates and reasons to drop covariates. Statistical significance should not be a key factor, in the vast majority of cases. Covariates may be of such substantive importance that they have to be there. The effect size of a covariate may be high, even if it is not significant. The covariate may affect other aspects of the model. The covariate may be a part of how your hypothesis was worded. If you are in a very exploratory mode and the covariate is not important in the literature and the effect size is small and the covariate has little effect on your model and the covariate was not in your hypothesis, then you could probably delete it just for simplicity. | {
"source": [
"https://stats.stackexchange.com/questions/66448",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/27142/"
]
} |
66,543 | I'm experimenting with random forests with scikit-learn and I'm getting great results on my training set, but relatively poor results on my test set... Here is the problem (inspired by poker) which I'm trying to solve:
Given player A's hole cards, player B's hole cards and a flop (3 cards), which player has the best hand?
Mathematically, this is 14 inputs (7 cards -- one rank and one suit for each) and one output (0 or 1). Here are some of my results so far: Training set size: 600k, test set size: 120k, number of trees: 25
Success rate in training set: 99.975%
Success rate in testing set: 90.05%
Training set size: 400k, test set size: 80k, number of trees: 100
Success rate in training set: 100%
Success rate in testing set: 89.7%
Training set size: 600k, test set size: 120k, number of trees: 5
Success rate in training set: 98.685%
Success rate in testing set: 85.69% Here is the relevant code used: from sklearn.ensemble import RandomForestClassifier
Forest = RandomForestClassifier(n_estimators = 25) # n_estimators varies
Forest = Forest.fit(inputs[:trainingSetSize],outputs[:trainingSetSize])
trainingOutputs = Forest.predict(inputs[:trainingSetSize])
testOutputs = Forest.predict(inputs[trainingSetSize:]) It appears that regardless of the number of trees used, performance on training set is much better than on test set, despite a relatively large training set and a reasonably small number of features... | This is a common rookie error when using RF models (I'll put my hand up as a previous perpetrator). The forest that you build using the training set will in many cases fit the training data almost perfectly (as you are finding) when considered in totality. However, as the algorithm builds the forest it remembers the out-of-bag (OOB) prediction error, which is its best guess of the generalization error. If you send the training data back into the predict method (as you are doing) you get this almost perfect prediction (which is wildly optimistic) instead of the correct OOB error. Don't do this. Instead, the trained Forest object should have remembered within it the OOB error. I am unfamiliar with the scikit-learn implementation but looking at the documentation here it looks like you need to specify oob_score=True when calling the fit method, and then the generalization error will be stored as oob_score_ in the returned object. In the R package "randomForest", calling the predict method with no arguments on the returned object will return the OOB prediction on the training set. That lets you define the error using some other measure. Sending the training set back into the predict method will give you a different result, as that will use all the trees. I don't know if the scikit-learn implementation will do this or not. It is a mistake to send the training data back into the predict method in order to test the accuracy. It's a very common mistake though, so don't worry. | {
"source": [
"https://stats.stackexchange.com/questions/66543",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8653/"
]
} |
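For illustration, here is a minimal sketch of the OOB check described in the random-forest answer above, using the R randomForest package that the answer mentions; the simulated inputs below are only a stand-in for the real poker features, not the actual data:
library(randomForest)
set.seed(1)
x <- data.frame(matrix(rnorm(2000 * 14), ncol = 14))  # stand-in for the 14 card features
y <- factor(rbinom(2000, 1, 0.5))                     # stand-in binary outcome (pure noise here)
rf <- randomForest(x = x, y = y, ntree = 25)
mean(predict(rf, newdata = x) == y)  # resubstitution accuracy: wildly optimistic
mean(predict(rf) == y)               # predict() with no newdata returns OOB predictions: an honest estimate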
66,586 | I'm creating Poisson GLMs in R. To check for overdispersion I'm looking at the ratio of residual deviance to degrees of freedom provided by summary(model.name). Is there a cutoff value or test for this ratio to be considered "significant?" I know that if it's >1 then the data are overdispersed, but if I have ratios relatively close to 1 [for example, one ratio of 1.7 (residual deviance = 25.48, df=15) and another of 1.3 (rd = 324, df = 253)], should I still switch to quasipoisson/negative binomial? I found here this test for significance: 1-pchisq(residual deviance,df), but I've only seen that once, which makes me nervous. I also read (I can't find the source) that a ratio < 1.5 is generally safe. Opinions? | In the R package AER you will find the function dispersiontest, which implements a Test for Overdispersion by Cameron & Trivedi (1990). It follows a simple idea: In a Poisson model, the mean is $E(Y)=\mu$ and the variance is $Var(Y)=\mu$ as well. They are equal. The test simply tests this assumption as a null hypothesis against an alternative where $Var(Y)=\mu + c * f(\mu)$, where the constant $c < 0$ means underdispersion and $c > 0$ means overdispersion. The function $f(.)$ is some monotone function (often linear or quadratic; the former is the default). The resulting test is equivalent to testing $H_0: c=0$ vs. $H_1: c \neq 0$, and the test statistic used is a $t$ statistic which is asymptotically standard normal under the null. Example: R> library(AER)
R> data(RecreationDemand)
R> rd <- glm(trips ~ ., data = RecreationDemand, family = poisson)
R> dispersiontest(rd,trafo=1)
Overdispersion test
data: rd
z = 2.4116, p-value = 0.007941
alternative hypothesis: true dispersion is greater than 0
sample estimates:
dispersion
5.5658 Here we clearly see that there is evidence of overdispersion (c is estimated to be 5.57), which speaks quite strongly against the assumption of equidispersion (i.e. c=0). Note that if you do not use trafo=1, it will actually do a test of $H_0: c^*=1$ vs. $H_1: c^* \neq 1$ with $c^*=c+1$, which has of course the same result as the other test apart from the test statistic being shifted by one. The reason for this, though, is that the latter corresponds to the common parametrization in a quasi-Poisson model.
"source": [
"https://stats.stackexchange.com/questions/66586",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/27319/"
]
} |
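As a rough complement to dispersiontest (not a replacement for it), the deviance/df ratio and the 1 - pchisq check mentioned in the question can be read off the fitted model directly; this sketch reuses the rd model fitted in the answer above:
deviance(rd) / df.residual(rd)                             # crude overdispersion ratio from the question
pchisq(deviance(rd), df.residual(rd), lower.tail = FALSE)  # the 1 - pchisq(residual deviance, df) check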
66,610 | I would like to generate pairs of random numbers with a certain correlation. However, the usual approach of using a linear combination of two normal variables is not valid here, because a linear combination of uniform variables is no longer a uniformly distributed variable. I need the two variables to be uniform. Any idea on how to generate pairs of uniform variables with a given correlation? | I'm not aware of a universal method to generate correlated random variables with any given marginal distributions. So, I'll propose an ad hoc method to generate pairs of uniformly distributed random variables with a given (Pearson) correlation.
Without loss of generality, I assume that the desired marginal distribution is standard uniform (i.e., the support is $[0, 1]$). The proposed approach relies on the following: a) For standard uniform random variables $U_1$ and $U_2$ with respective distribution functions $F_1$ and $F_2$, we have $F_i(U_i) = U_i$, for $i = 1, 2$.
Thus, by definition Spearman's rho is
$$
\rho_{\rm S}(U_1, U_2) = {\rm corr}(F_1(U_1), F_2(U_2)) = {\rm corr}(U_1, U_2) .
$$
So, Spearman's rho and Pearson's correlation coefficient are equal (sample versions might however differ). b) If $X_1, X_2$ are random variables with continuous margins and Gaussian copula with (Pearson) correlation coefficient $\rho$, then Spearman's rho is
$$
\rho_{\rm S}(X_1, X_2) = \frac{6}{\pi} \arcsin \left(\frac{\rho}{2}\right) .
$$
This makes it easy to generate random variables that have a desired value of Spearman's rho. The approach is to generate data from the Gaussian copula with an appropriate correlation coefficient $\rho$ such that the Spearman's rho corresponds to the desired correlation for the uniform random variables. Simulation algorithm Let $r$ denote the desired level of correlation, and $n$ the number of pairs to be generated.
The algorithm is: (1) compute $\rho = 2\sin (r \pi/6)$; (2) generate a pair of random variables from the Gaussian copula (e.g., with this approach); (3) repeat step 2 $n$ times. Example The following code is an example implementation of this algorithm using R with a target correlation $r = 0.6$ and $n = 500$ pairs. ## Initialization and parameters
set.seed(123)
r <- 0.6 # Target (Spearman) correlation
n <- 500 # Number of samples
## Functions
gen.gauss.cop <- function(r, n){
rho <- 2 * sin(r * pi/6) # Pearson correlation
P <- toeplitz(c(1, rho)) # Correlation matrix
d <- nrow(P) # Dimension
## Generate sample
U <- pnorm(matrix(rnorm(n*d), ncol = d) %*% chol(P))
return(U)
}
## Data generation and visualization
U <- gen.gauss.cop(r = r, n = n)
pairs(U, diag.panel = function(x){
h <- hist(x, plot = FALSE)
rect(head(h$breaks, -1), 0, tail(h$breaks, -1), h$counts/max(h$counts))}) In the resulting pairs plot, the diagonal panels show histograms of variables $U_1$ and $U_2$, and the off-diagonal panels show scatter plots of $U_1$ and $U_2$. By construction, the random variables have uniform margins and a correlation coefficient (close to) $r$. But due to the effect of sampling, the correlation coefficient of the simulated data is not exactly equal to $r$. cor(U)[1, 2]
# [1] 0.5337697 Note that the gen.gauss.cop function should work with more than two variables simply by specifying a larger correlation matrix. Simulation study The following simulation study repeated for target correlation $r= -0.5, 0.1, 0.6$ suggests that the distribution of the correlation coefficient converges to the desired correlation as the sample size $n$ increases. ## Simulation
set.seed(921)
r <- 0.6 # Target correlation
n <- c(10, 50, 100, 500, 1000, 5000); names(n) <- n # Number of samples
S <- 1000 # Number of simulations
res <- sapply(n,
function(n, r, S){
replicate(S, cor(gen.gauss.cop(r, n))[1, 2])
},
r = r, S = S)
boxplot(res, xlab = "Sample size", ylab = "Correlation")
abline(h = r, col = "red") | {
"source": [
"https://stats.stackexchange.com/questions/66610",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/17115/"
]
} |
66,791 | (First of all, just to confirm, an offset variable functions basically the same way in Poisson and negative binomial regression, right?) Reading about the use of an offset variable, it seems to me that most sources recommend including that variable as an option in statistical packages (exp() in Stata or offset() in R). Is that functionally the same as converting your outcome variable to a proportion if you're modeling count data and there is a finite number of times the count could have happened? My example is looking at employee dismissal, and I believe the offset here would simply be log(number of employees). And as an added question, I am having trouble conceptualizing what the difference is between these first two options (including exposure as an option in the software and converting the DV to a proportion) and including the exposure on the RHS as a control. Any help here would be appreciated. | Recall that an offset is just a predictor variable whose coefficient is fixed at 1. So, using the standard setup for a Poisson regression with a log link, we have: $$\log \mathrm{E}(Y) = \beta' \mathrm{X} + \log \mathcal{E}$$ where $\mathcal{E}$ is the offset/exposure variable. This can be rewritten as $$\log \mathrm{E}(Y) - \log \mathcal{E} = \beta' \mathrm{X}$$
$$\log \mathrm{E}(Y/\mathcal{E}) = \beta' \mathrm{X}$$ Your underlying random variable is still $Y$, but by dividing by $\mathcal{E}$ we've converted the LHS of the model equation to be a rate of events per unit exposure. But this division also alters the variance of the response, so we have to weight by $\mathcal{E}$ when fitting the model. Example in R: library(MASS) # for Insurance dataset
# modelling the claim rate, with exposure as a weight
# use quasipoisson family to stop glm complaining about nonintegral response
glm(Claims/Holders ~ District + Group + Age,
family=quasipoisson, data=Insurance, weights=Holders)
Call: glm(formula = Claims/Holders ~ District + Group + Age, family = quasipoisson,
data = Insurance, weights = Holders)
Coefficients:
(Intercept) District2 District3 District4 Group.L Group.Q Group.C Age.L Age.Q Age.C
-1.810508 0.025868 0.038524 0.234205 0.429708 0.004632 -0.029294 -0.394432 -0.000355 -0.016737
Degrees of Freedom: 63 Total (i.e. Null); 54 Residual
Null Deviance: 236.3
Residual Deviance: 51.42 AIC: NA
# with log-exposure as offset
glm(Claims ~ District + Group + Age + offset(log(Holders)),
family=poisson, data=Insurance)
Call: glm(formula = Claims ~ District + Group + Age + offset(log(Holders)),
family = poisson, data = Insurance)
Coefficients:
(Intercept) District2 District3 District4 Group.L Group.Q Group.C Age.L Age.Q Age.C
-1.810508 0.025868 0.038524 0.234205 0.429708 0.004632 -0.029294 -0.394432 -0.000355 -0.016737
Degrees of Freedom: 63 Total (i.e. Null); 54 Residual
Null Deviance: 236.3
Residual Deviance: 51.42 AIC: 388.7 | {
"source": [
"https://stats.stackexchange.com/questions/66791",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/21971/"
]
} |
66,939 | I am trying to figure out what the manifold assumption means in semi-supervised learning. Can anyone explain in a simple way? I cannot get the intuition behind it. It says that your data lie on a low-dimensional manifold embedded in a
higher-dimensional space. I didn't get what that means. | Imagine that you have a bunch of seeds fastened on a glass plate, which is resting horizontally on a table. Because of the way we typically think about space, it would be safe to say that these seeds live in a two-dimensional space, more or less, because each seed can be identified by the two numbers that give that seed's coordinates on the surface of the glass. Now imagine that you take the plate and tilt it diagonally upwards, so that the surface of the glass is no longer horizontal with respect to the ground. Now, if you wanted to locate one of the seeds, you have a couple of options. If you decide to ignore the glass, then each seed would appear to be floating in the three-dimensional space above the table, and so you'd need to describe each seed's location using three numbers, one for each spatial direction. But just by tilting the glass, you haven't changed the fact that the seeds still live on a two-dimensional surface. So you could describe how the surface of the glass lies in three-dimensional space, and then you could describe the locations of the seeds on the glass using your original two dimensions. In this thought experiment, the glass surface is akin to a low-dimensional manifold that exists in a higher-dimensional space : no matter how you rotate the plate in three dimensions, the seeds still live along the surface of a two-dimensional plane. Examples More generally, a low-dimensional manifold embedded in a higher-dimensional space is just a set of points that, for whatever reason, are considered to be connected or part of the same set. Notably, the manifold might be contorted somehow in the higher-dimensional space (e.g., perhaps the surface of the glass is warped into a bowl shape instead of a plate shape), but the manifold is still basically low-dimensional. Especially in high-dimensional space, this manifold could take many different forms and shapes, but because we live in a three-dimensional world, it's difficult to imagine examples that have more than three dimensions. Just as a sample, though, consider these examples : a piece of glass (planar, two-dimensional) in physical space (three-dimensional) a single thread (one-dimensional) in a piece of fabric (two-dimensional) a piece of fabric (two-dimensional) crumpled up in the washing machine (three-dimensional) Common examples of manifolds in machine learning (or at least sets that are hypothesized to live along low-dimensional manifolds) include : images of natural scenes (typically you do not see images of white noise, for instance, meaning that "natural" images do not occupy the entire space of possible pixel configurations) natural sounds (similar argument) human movements (the human body has hundreds of degrees of freedom, but movements appear to live in a space that can be represented effectively using ~10 dimensions) Learning the manifold The manifold assumption in machine learning is that, instead of assuming that data in the world could come from every part of the possible space (e.g., the space of all possible 1-megapixel images, including white noise), it makes more sense to assume that training data come from relatively low-dimensional manifolds (like the glass plate with the seeds). Then learning the structure of the manifold becomes an important task; additionally, this learning task seems to be possible without the use of labeled training data. There are many, many different ways of learning the structure of a low-dimensional manifold. 
One of the most widely used approaches is PCA, which assumes that the manifold consists of a single ellipsoidal "blob" like a pancake or cigar shape, embedded in a higher-dimensional space. More complicated techniques like isomap, ICA, or sparse coding relax some of these assumptions in various ways. Semi-supervised learning The reason the manifold assumption is important in semi-supervised learning is two-fold. For many realistic tasks (e.g., determining whether the pixels in an image show a 4 or a 5), there is much more data available in the world without labels (e.g., images that might have digits in them) than with labels (e.g., images that are explicitly labeled "4" or "5"). In addition, there are many orders of magnitude more information available in the pixels of the images than there are in the labels of the images that have labels. But, like I described above, natural images aren't actually sampled from the uniform distribution over pixel configurations, so it seems likely that there is some manifold that captures the structure of natural images. But if we assume further that the images containing 4s all lie on their own manifold, while the images containing 5s likewise lie on a different but nearby manifold, then we can try to develop representations for each of these manifolds using just the pixel data, hoping that the different manifolds will be represented using different learned features of the data. Then, later, when we have a few bits of label data available, we can use those bits to simply apply labels to the already-identified manifolds. Most of this explanation comes from work in the deep and feature learning literature. Yoshua Bengio and Yann LeCun -- see the Energy Based Learning Tutorial have particularly accessible arguments in this area. | {
"source": [
"https://stats.stackexchange.com/questions/66939",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/12329/"
]
} |
67,204 | I have a feeling this may have been asked elsewhere, but not really with the type of basic description I need. I know non-parametric relies on the median instead of the mean to compare... something. I also believe it relies on "degrees of freedom"(?) instead of standard deviation. Correct me if I'm wrong, though. I've done pretty good research, or so I'd thought, trying to understand the concept, what the workings are behind it, what the test results really mean, and/or what to even do with the test results; however, no one seems to ever venture into that area. For the sake of simplicity let's stick with the Mann-Whitney U-test, which I've noticed is quite popular (and also seemingly misused and overused too in order to force one's "square model into a circle hole"). If you'd like to describe the other tests as well feel free, although I feel once I understand one, I can understand the others in an analogous way towards various t-tests etc. Let's say I run a non-parametric test with my data and I get this result back: 2 Sample Mann-Whitney - Customer Type
Test Information
H0: Median Difference = 0
Ha: Median Difference ≠ 0
Size of Customer Large Small
Count 45 55
Median 2 2
Mann-Whitney Statistic: 2162.00
p-value (2-sided, adjusted for ties): 0.4156 I'm familiar with other methods, but what is different here? Should we want the p-value to be lower than .05? What does the "Mann-Whitney statistic" mean? Is there any use for it? Does this information here just verify or not verify that a particular source of data I have should or should not be used? I have a reasonable amount of experience with regression and the basics, but am very curious about this "special" non-parametric stuff - which I know will have it's own shortcomings. Just imagine I'm a fifth grader and see if you can explain it to me. | I know non-parametric relies on the median instead of the mean Hardly any nonparametric tests actually "rely on" medians in this sense. I can only think of a couple... and the only one I expect you'd be likely to have even heard of would be the sign test. to compare...something. If they relied on medians, presumably it would be to compare medians. But - in spite of what a number of sources try to tell you - tests like the signed rank test, or the Wilcoxon-Mann-Whitney or the Kruskal-Wallis are not really a test of medians at all; if you make some additional assumptions, you can regard the Wilcoxon-Mann-Whitney and the Kruskal-Wallis as tests of medians, but under the same assumptions (as long as the distributional means exist) you could equally regard them as a test of means. The actual location-estimate relevant to the Signed Rank test is the median of pairwise averages within-sample (over $\frac12 n(n+1)$ pairs including self-pairs), the one for the Wilcoxon-Mann-Whitney is the median of pairwise differences across-samples. I also believe it relies on "degrees of freedom?" instead of standard deviation. Correct me if I'm wrong though. Most nonparametric tests don't have 'degrees of freedom' in the specific sense that the chi-squared or the t-test of the F-test do (each of which has to do with the number of degrees of freedom in an estimate of variance), though the distribution of many change with sample size and you might regard that as somewhat akin to degrees of freedom in the sense that the tables change with sample size. The samples do of course retain their properties and have n degrees of freedom in that sense but the degrees of freedom in the distribution of a test statistic is not typically something we're concerned with. It can happen that you have something more like degrees of freedom - for example, you could certainly make an argument that the Kruskal-Wallis does have degrees of freedom in basically the same sense that a chi-square does, but it's usually not looked at that way (for example, if someone's talking about the degrees of freedom of a Kruskal-Wallis, they will nearly always mean the d.f. of the chi-square approximation to the distribution of the statistic). A good discussion of degrees of freedom may be found here / I've done pretty good research, or so I've thought, trying to understand the concept, what the workings are behind it, what the test results really mean, and/or what to even do with the test results; however no one seems to ever venture into that area. I'm not sure what you mean by this. I could suggest some books, like Conover's Practical Nonparametric Statistics , and if you can get it, Neave and Worthington's book ( Distribution-Free Tests ), but there are
many others - Marascuilo & McSweeney, Hollander & Wolfe, or Daniel's book for example. I suggest you read at least 3 or 4 of the ones that speak to you best, preferably ones that explain things as differently as possible (this would mean at least reading a little of perhaps 6 or 7 books to find say 3 that suit). For the sake of simplicity lets stick with the Mann Whitney U test, which I've noticed is quite popular It is, which is what puzzled me about your statement "no one seems to ever venture into that area" - many people who use these tests do 'venture into the area' you were talking about. - and also seemingly misused and overused I'd say nonparametric tests are generally underused if anything (including the Wilcoxon-Mann-Whitney) -- most especially permutation/randomization tests, though I wouldn't necessarily dispute that they're frequently misused (but so are parametric tests, even more so). Let's say I run a non-parametric test with my data and I get this result back: [snip...] I'm familiar with other methods, but what is different here? Which other methods do you mean? What do you want me to compare this to? Edit: You mention regression later; I assume then that you are familiar with a two-sample t-test (since it's really a special case of regression). Under the assumptions for the ordinary two-sample t-test, the null hypothesis has that the two populations are identical, against the alternative that one of the distributions has shifted. If you look at the first of the two sets of hypotheses for the Wilcoxon-Mann-Whitney below, the basic thing being tested there is almost identical; it's just that the t-test is based on assuming the samples come from identical normal distributions (apart from possible location-shift). If the null hypothesis is true, and the accompanying assumptions are true, the test statistic has a t-distribution. If the alternative hypothesis is true, then the test-statistic becomes more likely to take values that don't look consistent with the null hypothesis but do look consistent with the alternative -- we focus on the most unusual, or extreme outcomes (the ones most consistent with the alternative) - if they occur, we conclude that the samples we got would not have occurred by chance when the null was true (they could do, but the probability of a result at least that much consistent with the alternative is so low that we consider the alternative hypothesis a better explanation for what we observe than "the null hypothesis along with the operation of chance"). The situation is very similar with the Wilcoxon-Mann-Whitney, but it measures the deviation from the null somewhat differently. In fact, when the assumptions of the t-test are true*, it's almost as good as the best possible test (which is the t-test). *(which in practice is never, though that's not really as much of a problem as it sounds) Indeed, it's possible to consider the Wilcoxon-Mann-Whitney as effectively a "t-test" performed on the ranks of the data - though then it doesn't have a t-distribution; the statistic is a monotonic function of a two-sample t-statistic computed on the ranks of the data, so it induces the same ordering on the sample space (that is a "t-test" on the ranks - appropriately performed - would generate the same p-values as a Wilcoxon-Mann-Whitney), so it rejects exactly the same cases. 
[You'd think that just using the ranks would be throwing away a lot of information, but when the data are drawn from normal populations with the same variance, almost all the information about location-shift is in the patterns of the ranks. The actual data values (conditional on their ranks) add very little additional information to that. If you go heavier-tailed than normal, it's not long before the Wilcoxon-Mann-Whitney test has better power, as well as retaining its nominal significance level, so that 'extra' information above the ranks eventually becomes not just uninformative but in some sense, misleading. However, near-symmetric heavy-tailedness is a rare situation; what you often see in practice is skewness.] The basic ideas are quite similar, the p-values have the same interpretation (the probability of a result as, or more extreme, if the null hypothesis were true) -- right down to the interpretation of a location-shift, if you make the requisite assumptions (see the discussion of the hypotheses near the end of this post). If I did the same simulation as in the plots above for the t-test, the plots would look very similar - the scale on the x- and y-axes would look different, but the basic appearance would be similar. Should we want the p-value to be lower than .05? You shouldn't "want" anything there. The idea is to find out if the samples are more different (in a location-sense) than can be explained by chance, not to 'wish' a particular outcome. If I say "Can you go see what color Raj's car is please?", if I want an unbiased assessment of it I don't want you to be going "Man, I really, really hope it's blue! It just has to be blue". Best to just see what the situation is, rather than to go in with some 'I need it to be something'. If your chosen significance level is 0.05, then you'll reject the null hypothesis when the p-value is ≤ 0.05. But failure to reject when you have a big enough sample size to nearly always detect relevant effect-sizes is at least as interesting, because it says that any differences that exist are small. What does the "mann whitley" number mean? The Mann-Whitney statistic . It's really only meaningful in comparison with the distribution of values it can take when the null hypothesis is true (see the above diagram), and that depends on which of several particular definitions any particular program might use. Is there any use for it? Usually you don't care about the exact value as such, but where it lies in the null-distribution (whether it's more or less typical of the values you should see when the null hypothesis is true, or whether it's more extreme) (Edit: You can obtain or work out some directly informative quantities when doing such a test - like the location shift or $P(X<Y)$ discussed below, and indeed you can work out the second one fairly directly from the statistic, but the statistic alone isn't a very informative number) Does this data here just verify or not verify that a particular source of data I have should or should not be used? This test doesn't say anything about "a particular source of data I have should or should not be used". See my discussion of the two ways of looking at the WMW hypotheses below. 
I have a reasonable amount of experience with regression and the basics, but am very curious about this "special" non-parametric stuff There's nothing particularly special about nonparametric tests (I'd say the 'standard' ones are in many ways even more basic than the typical parametric tests) -- as long as you actually understand hypothesis testing. That's probably a topic for another question, however. There are two main ways to look at the Wilcoxon-Mann-Whitney hypothesis test. i) One is to say "I'm interested in location-shift - that is that under the null hypothesis, the two populations have the same (continuous) distribution , against the alternative that one is 'shifted' up or down relative to the other" The Wilcoxon-Mann-Whitney works very well if you make this assumption (that your alternative is just a location shift) In this case, the Wilcoxon-Mann-Whitney actually is a test for medians ... but equally it's a test for means, or indeed any other location-equivariant statistic (90th percentiles, for example, or trimmed means, or any number of other things), since they're all affected the same way by location-shift. The nice thing about this is that it's very easily interpretable -- and it's easy to generate a confidence interval for this location-shift. However, the Wilcoxon-Mann-Whitney test is sensitive to other kinds of difference than a location shift. ii) The other is to take the fully general approach. You can characterize this as a test for the probability that a random value from population 1 is less than a random value from population 2 (and indeed, you can turn your Wilcoxon-Mann-Whitney statistic into a direct estimate of that probability, if you're so inclined; the Mann&Whitney formulation in terms of U-statistics counts the number of times one exceeds the other in the samples, you only need scale that to achieve an estimate of the probability); the null is that the population probability is $\frac{1}{2}$ , against the alternative that it differs from $\frac{1}{2}$ . However , while it can work okay in this situation, the test is formulated on the assumption of exchangability under the null. Among other things that would require that in the null case the two distributions are the same. If we don't have that, and are instead are in a slightly different situation like the one pictured above, we won't typically have a test with significance level $\alpha$ . In the pictured case it would likely be a bit lower. So while it "works" in the sense that it tends not to reject when $H_0$ is true and tends to reject more when $H_0$ is false, you want the distributions to be pretty close to identical under the null or the test doesn't behave the way we would expect it to. | {
"source": [
"https://stats.stackexchange.com/questions/67204",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/28928/"
]
} |
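As a small illustration of interpretation (i) in the answer above, R's wilcox.test can return both the test and an estimate with confidence interval for the location shift (the Hodges-Lehmann median of pairwise differences); the two vectors below are simulated stand-ins, not the customer data from the question:
set.seed(1)
x <- rexp(45)        # stand-in for the "Large" group
y <- rexp(55) + 0.5  # stand-in for the "Small" group, shifted upward by 0.5
wilcox.test(x, y, conf.int = TRUE)  # W statistic, p-value, CI and point estimate of the location shift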
67,443 | I know that the beta distribution is conjugate to the binomial. But what is the conjugate prior of the beta? Thank you. | It seems that you already gave up on conjugacy. Just for the record, one thing that I've seen people doing (but don't remember exactly where, sorry) is a reparameterization like this. If $X_1,\dots,X_n$ are conditionally iid, given $\alpha,\beta$, such that $X_i\mid\alpha,\beta\sim\mathrm{Beta}(\alpha,\beta)$, remember that
$$
\mathbb{E}[X_i\mid\alpha,\beta]=\frac{\alpha}{\alpha+\beta} =: \mu
$$
and
$$
\mathbb{Var}[X_i\mid\alpha,\beta] = \frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)} =: \sigma^2 \, .
$$
Hence, you may reparameterize the likelihood in terms of $\mu$ and $\sigma^2$ and use as a prior
$$
\sigma^2\mid\mu \sim \mathrm{U}[0,\mu(1-\mu)] \qquad \qquad \mu\sim\mathrm{U}[0,1] \, .
$$
Now you're ready to compute the posterior and explore it by your favorite computational method. | {
"source": [
"https://stats.stackexchange.com/questions/67443",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7616/"
]
} |
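To make "compute the posterior and explore it" a little more concrete, here is a hedged R sketch (not from the answer itself) of the unnormalized log-posterior implied by these priors; x is placeholder data on (0, 1), and the function can be handed to a grid evaluation or a Metropolis sampler:
set.seed(1)
x <- rbeta(50, 2, 5)  # placeholder data on (0, 1)
log.post <- function(mu, s2, x) {
  if (mu <= 0 || mu >= 1 || s2 <= 0 || s2 >= mu * (1 - mu)) return(-Inf)  # outside the prior support
  k <- mu * (1 - mu) / s2 - 1   # map (mu, sigma^2) back to the Beta parameters: alpha = mu*k, beta = (1-mu)*k
  a <- mu * k
  b <- (1 - mu) * k
  sum(dbeta(x, a, b, log = TRUE)) - log(mu * (1 - mu))  # log-likelihood + log prior density of sigma^2 given mu
}
log.post(0.3, 0.02, x)  # evaluate at one candidate (mu, sigma^2)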
67,547 | The gamma distribution can take on a pretty wide range of shapes, and given the link between the mean and the variance through its two parameters, it seems suited to dealing with heteroskedasticity in non-negative data, in a way that log-transformed OLS can't do without either WLS or some sort of heteroskedasticity-consistent VCV estimator. I would use it more for routine non-negative data modeling, but I don't know anyone else that uses it, I haven't learned it in a formal classroom setting, and the literature that I read never uses it. Whenever I Google something like "practical uses of gamma GLM", I come up with advice to use it for waiting times between Poisson events. OK. But that seems restrictive and can't be its only use. Naively, it seems like the gamma GLM is a relatively assumption-light means of modeling non-negative data, given gamma's flexibility. Of course you need to check Q-Q plots and residual plots like any model. But are there any serious drawbacks that I am missing? Beyond communication to people who "just run OLS"? | The gamma has a property shared by the lognormal; namely that when the shape parameter is held constant while the scale parameter is varied (as is usually done when using either for models), the variance is proportional to mean-squared (constant coefficient of variation). Something approximate to this occurs fairly often with financial data, or indeed, with many other kinds of data. As a result it's often suitable for data that are continuous, positive, right-skew and where variance is near-constant on the log-scale, though there are a number of other well-known (and often fairly readily available) choices with those properties. Further, it's common to fit a log-link with the gamma GLM (it's relatively more rare to use the natural link). What makes it slightly different from fitting a normal linear model to the logs of the data is that on the log scale the gamma is left skew to varying degrees while the normal (the log of a lognormal) is symmetric. This makes it (the gamma) useful in a variety of situations. I've seen practical uses for gamma GLMs discussed (with real data examples) in (off the top of my head) de Jong & Heller and Frees as well as numerous papers; I've also seen applications in other areas. Oh, and if I remember right, Venables and Ripley's MASS uses it on school absenteeism (the quine data; Edit: turns out it's actually in Statistics Complements to MASS , see p11, the 14th page of the pdf, it has a log link but there's a small shift of the DV). Uh, and McCullagh and Nelder did a blood clotting example, though perhaps it may have been natural link. Then there's Faraway's book where he did a car insurance example and a semiconductor manufacturing data example. There are some advantages and some disadvantages to choosing either of the two options. Since these days both are easy to fit; it's generally a matter of choosing what's most suitable. It's far from the only option; for example, there's also inverse Gaussian GLMs, which are more skew/heavier tailed (and even more heteroskedastic) than either gamma or lognormal. As for drawbacks, it's harder to do prediction intervals. Some diagnostic displays are harder to interpret. Computing expectations on the scale of the linear predictor (generally the log-scale) is harder than for the equivalent lognormal model. Hypothesis tests and intervals are generally asymptotic. These are often relatively minor issues. 
It has some advantages over log-link lognormal regression (taking logs and fitting an ordinary linear regression model); one is that mean prediction is easy. | {
"source": [
"https://stats.stackexchange.com/questions/67547",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/17359/"
]
} |
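A minimal sketch (simulated data with constant coefficient of variation, as discussed above) of fitting a log-link gamma GLM in R:
set.seed(1)
x <- runif(200)
y <- rgamma(200, shape = 5, rate = 5 / exp(1 + 2 * x))  # mean exp(1 + 2x), constant CV
fit <- glm(y ~ x, family = Gamma(link = "log"))
summary(fit)
head(predict(fit, type = "response"))  # mean predictions are direct on the original scale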
67,911 | Assume I have a distribution governing the possible outcome from a single random variable X.
This is something like [0.1, 0.4, 0.2, 0.3] for X being a value of either 1, 2, 3, 4. Is it possible to sample from this distribution, i.e. generate pseudo random numbers upon each of the possible outcomes given the probability of that outcome. So if I wanted to know what the probability of getting a 2 is, the sampling operation may return 0.34 or something like that. The reason I ask is that I'm trying to implement an action selection policy for a reinforcement learning method based on a research paper. From what I gather from the paper, the author is able to sample the distribution by "mapping the uniform distribution U[0,1] through cumulative probability density functions obtained by adaptive numerical integration". From this he then samples the transition probabilities for each trial... I would be grateful for any info on this... Thanks in advance | Sure. Here's an R function that will sample from that distribution n times, with replacement: sampleDist = function(n) {
sample(x = c(1,2,3,4), n, replace = T, prob = c(0.1, 0.4, 0.2, 0.3))
}
# > sampleDist(10)
# [1] 4 2 2 2 2 2 4 1 2 2 If you want to go a little lower level, you can see the actual algorithm used by checking out the R source (written in C): /* Unequal probability sampling; with-replacement case
* n are the lengths of p and perm. p contains probabilities, perm
* contains the actual outcomes, and ans contains an array of values
* that were sampled.
*/
static void ProbSampleReplace(int n, double *p, int *perm, int nans, int *ans)
{
double rU;
int i, j;
int nm1 = n - 1;
/* record element identities */
for (i = 0; i < n; i++)
perm[i] = i + 1;
/* sort the probabilities into descending order */
revsort(p, perm, n);
/* compute cumulative probabilities */
for (i = 1 ; i < n; i++)
p[i] += p[i - 1];
/* compute the sample */
for (i = 0; i < nans; i++) {
rU = unif_rand();
for (j = 0; j < nm1; j++) {
if (rU <= p[j])
break;
}
ans[i] = perm[j];
}
} | {
"source": [
"https://stats.stackexchange.com/questions/67911",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/29359/"
]
} |
68,080 | Ok, this is a quite basic question, but I am a little bit confused. In my thesis I write: The standard errors can be found by calculating the inverse of the square root of the diagonal elements of the (observed) Fisher Information matrix: \begin{align*}
s_{\hat{\mu},\hat{\sigma}^2}=\frac{1}{\sqrt{\mathbf{I}(\hat{\mu},\hat{\sigma}^2)}}
\end{align*}
Since the optimization command in R minimizes $-\log\mathcal{L}$ the (observed) Fisher Information matrix can be found by calculating the inverse of the Hessian:
\begin{align*}
\mathbf{I}(\hat{\mu},\hat{\sigma}^2)=\mathbf{H}^{-1}
\end{align*} My main question: Is this correct what I am saying ? I am a little bit confused, because in this source on page 7 it says: the Information matrix is the negative of the expected value of the
Hessian matrix (So no inverse of the Hessian.) Whereas in this source on page 7 (footnote 5) it says: The observed Fisher information is equal to $(-H)^{-1}$. (So here is the inverse.) I am aware of the minus sign and when to use it and when not, but why is there a difference in taking the inverse or not? | Yudi Pawitan writes in his book In All Likelihood that the second derivative of the log-likelihood evaluated at the maximum likelihood estimates (MLE) is the observed Fisher information (see also this document , page 1). This is exactly what most optimization algorithms like optim in R return: the Hessian evaluated at the MLE. When the negative log-likelihood is minimized, the negative Hessian is returned. As you correctly point out, the estimated standard errors of the MLE are the square roots of the diagonal elements of the inverse of the observed Fisher information matrix. In other words: The square roots of the diagonal elements of the inverse of the Hessian (or the negative Hessian) are the estimated standard errors. Summary The negative Hessian evaluated at the MLE is the same as the observed Fisher information matrix evaluated at the MLE. Regarding your main question: No, it's not correct that the
observed Fisher information can be found by inverting the (negative)
Hessian. Regarding your second question: The inverse of the (negative) Hessian is an estimator of the asymptotic covariance matrix. Hence, the square roots of the diagonal elements of covariance matrix are estimators of the standard errors. I think the second document you link to got it wrong. Formally Let $l(\theta)$ be a log-likelihood function. The Fisher information matrix $\mathbf{I}(\theta)$ is a symmetrical $(p\times p)$ matrix containing the entries: $$
\mathbf{I}(\theta)=-\frac{\partial^{2}}{\partial\theta_{i}\partial\theta_{j}}l(\theta),~~~~ 1\leq i, j\leq p
$$ The observed Fisher information matrix is simply $\mathbf{I}(\hat{\theta}_{\mathrm{ML}})$ , the information matrix evaluated at the maximum likelihood estimates (MLE). The Hessian is defined as: $$
\mathbf{H}(\theta)=\frac{\partial^{2}}{\partial\theta_{i}\partial\theta_{j}}l(\theta),~~~~ 1\leq i, j\leq p
$$ It is nothing else but the matrix of second derivatives of the likelihood function with respect to the parameters. It follows that if you minimize the negative log-likelihood, the returned Hessian is the equivalent of the observed Fisher information matrix whereas in the case that you maximize the log-likelihood, then the negative Hessian is the observed information matrix. Further, the inverse of the Fisher information matrix is an estimator of the asymptotic covariance matrix: $$
\mathrm{Var}(\hat{\theta}_{\mathrm{ML}})=[\mathbf{I}(\hat{\theta}_{\mathrm{ML}})]^{-1}
$$ The standard errors are then the square roots of the diagonal elements of the covariance matrix.
For the asymptotic distribution of a maximum likelihood estimate, we can write $$
\hat{\theta}_{\mathrm{ML}}\stackrel{a}{\sim}\mathcal{N}\left(\theta_{0}, [\mathbf{I}(\hat{\theta}_{\mathrm{ML}})]^{-1}\right)
$$ where $\theta_{0}$ denotes the true parameter value. Hence, the estimated standard error of the maximum likelihood estimates is given by: $$
\mathrm{SE}(\hat{\theta}_{\mathrm{ML}})=\frac{1}{\sqrt{\mathbf{I}(\hat{\theta}_{\mathrm{ML}})}}
$$ | {
"source": [
"https://stats.stackexchange.com/questions/68080",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/25675/"
]
} |
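A small R sketch of the recipe above for a simulated normal sample: minimize the negative log-likelihood with optim(..., hessian = TRUE); the returned Hessian is then the observed information, and the standard errors are the square roots of the diagonal of its inverse (the model is parameterized in sigma rather than sigma^2, purely for simplicity):
set.seed(1)
x <- rnorm(100, mean = 5, sd = 2)
negll <- function(par, x) -sum(dnorm(x, mean = par[1], sd = par[2], log = TRUE))
fit <- optim(c(mean(x), sd(x)), negll, x = x, hessian = TRUE)
fit$par                         # MLEs of (mu, sigma)
sqrt(diag(solve(fit$hessian)))  # estimated standard errors from the observed information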
68,151 | I am reading a book on linear regression and have some trouble understanding the variance-covariance matrix of $\mathbf{b}$: The diagonal items are easy enough, but the off-diagonal ones are a bit more difficult, what puzzles me is that
$$
\sigma(b_0, b_1) = E(b_0 b_1) - E(b_0)E(b_1) = E(b_0 b_1) - \beta_0 \beta_1
$$ but there is no trace of $\beta_0$ and $\beta_1$ here. | This is actually a cool question that challenges your basic understanding of a regression. First take out any initial confusion about notation. We are looking at the regression: $$y=b_0+b_1x+\hat{u}$$ where $b_0$ and $b_1$ are the estimators of the true $\beta_0$ and $\beta_1$, and $\hat{u}$ are the residuals of the regression. Note that the underlying true and unobserved regression is thus denoted as: $$y=\beta_0+\beta_1x+u$$ with expectation $E[u]=0$ and variance $E[u^2]=\sigma^2$. Some books denote $b$ as $\hat{\beta}$ and we adapt this convention here. We also make use of matrix notation, where $b$ is the 2x1 vector that holds the estimators of $\beta=[\beta_0, \beta_1]'$, namely $b=[b_0, b_1]'$. (Also for the sake of clarity I treat X as fixed in the following calculations.) Now to your question. Your formula for the covariance is indeed correct, that is: $$\sigma(b_0, b_1) = E(b_0 b_1) - E(b_0)E(b_1) = E(b_0 b_1) - \beta_0 \beta_1
$$ I think you want to know how come we have the true unobserved coefficients $\beta_0, \beta_1$ in this formula? They actually get cancelled out if we take it a step further by expanding the formula. To see this, note that the population variance of the estimator is given by: $$Var(\hat\beta)=\sigma^2(X'X)^{-1}$$ This matrix holds the variances in the diagonal elements and covariances in the off-diagonal elements. To arrive at the above formula, let's generalize your claim by using matrix notation. Let us therefore denote variance with $Var[\cdot]$ and expectation with $E[\cdot]$. $$Var[b]=E[b^2]-E[b]E[b']$$ Essentially we have the general variance formula, just using matrix notation. The equation resolves when substituting in the standard expression for the estimator $b=(X'X)^{-1}X'y$. Also assume $E[b]=\beta$, i.e., that $b$ is an unbiased estimator. Hence, we obtain: $$E[((X'X)^{-1}X'y)^2] - \underset{2 \times 2}{\beta^2}$$ Note that on the right hand side $\beta^2$ is a 2x2 matrix, namely $\beta\beta'$, but you may at this point already guess what will happen with this term shortly. Replacing $y$ with our expression for the true underlying data generating process above, we have: \begin{align*}
E\Big[\Big((X'X)^{-1}X'y\Big)^2\Big] - \beta^2
&= E\Big[\Big((X'X)^{-1}X'(X\beta+u)\Big)^2\Big]-\beta^2 \\
&= E\Big[\Big(\underbrace{(X'X)^{-1}X'X}_{=I}\beta+(X'X)^{-1}X'u\Big)^2\Big]-\beta^2 \\
&= E\Big[\Big(\beta+(X'X)^{-1}X'u\Big)^2\Big]-\beta^2 \\
&= \beta^2+E\Big[\Big(X'X)^{-1}X'u\Big)^2\Big]-\beta^2
\end{align*} since $E[u]=0$. Furthermore, the quadratic $\beta^2$ term cancels out as anticipated. Thus, by linearity of expectations, we have: $$Var[b]=((X'X)^{-1}X')^2E[u^2]$$ Note that by assumption $E[u^2]=\sigma^2$ and $((X'X)^{-1}X')^2=(X'X)^{-1}X'X(X'X)'^{-1}=(X'X)^{-1}$, since $X'X$ is a $K\times K$ symmetric matrix and thus the same as its transpose. Finally we arrive at $$Var[b]=\sigma^2(X'X)^{-1}$$ and we have got rid of all $\beta$ terms. Intuitively, the variance of the estimator is independent of the value of the true underlying coefficient, as this is not a random variable per se. The result holds for all individual elements of the variance-covariance matrix shown in the book, and thus also for the off-diagonal elements, with $\beta_0\beta_1$ cancelling out accordingly. The only problem was that you had applied the general formula for the variance, which does not reflect this cancellation at first. Ultimately, the variance of the coefficients reduces to $\sigma^2(X'X)^{-1}$ and is independent of $\beta$. But what does this mean? (I believe you also asked for a more general understanding of the covariance matrix.) Look at the formula in the book. It simply asserts that the variance of the estimator increases when the true underlying error term is more noisy ($\sigma^2$ increases), but decreases when the spread of X increases. Having more observations spread around the true value lets you in general build an estimator that is more accurate and thus closer to the true $\beta$. On the other hand, the covariance terms on the off-diagonal become practically relevant in hypothesis testing of joint hypotheses such as $b_0=b_1=0$. Other than that they are a bit of a fudge, really. Hope this clarifies all questions. | {
"source": [
"https://stats.stackexchange.com/questions/68151",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7787/"
]
} |
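A quick numerical check of the final formula on simulated data: the covariance matrix reported by lm() coincides with $\hat\sigma^2 (X'X)^{-1}$:
set.seed(1)
x <- rnorm(50)
y <- 1 + 2 * x + rnorm(50)
fit <- lm(y ~ x)
X <- model.matrix(fit)
s2 <- sum(residuals(fit)^2) / df.residual(fit)  # estimate of sigma^2
s2 * solve(t(X) %*% X)                          # matches vcov(fit)
vcov(fit)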
68,157 | I am trying to use the glmnet MATLAB package to train my elastic net model on some huge data. My features are of size 13200, and I have around 6000 samples of these. I directly tried to use lassoglm in MATLAB with these features and corresponding target taking cross validation to just 3 folds and alpha = 0.5. It's already 6 hours and it hasn't finished. I have to do it for several others as well. Any suggestions what I should do? | {
"source": [
"https://stats.stackexchange.com/questions/68157",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/12329/"
]
} |
68,834 | I'm wondering what the value is in taking a continuous predictor variable and breaking it up (e.g., into quintiles), before using it in a model. It seems to me that by binning the variable we lose information. Is this just so we can model non-linear effects? If we kept the variable continuous and it wasn't really a straight linear relationship would we need to come up with some kind of curve to best fit the data? | You're right on both counts. See Frank Harrell's page here for a long list of problems with binning continuous variables. If you use a few bins you throw away a lot of information in the predictors; if you use many you tend to fit wiggles in what should be a smooth, if not linear, relationship, & use up a lot of degrees of freedom. Generally better to use polynomials ( $x + x^2 + \ldots$ ) or splines (piecewise polynomials that join smoothly) for the predictors. Binning's really only a good idea when you'd expect a discontinuity in the response at the cut-points—say the temperature something boils at, or the legal age for driving–, & when the response is flat between them.. The value?—well, it's a quick & easy way to take curvature into account without having to think about it, & the model may well be good enough for what you're using it for. It tends to work all right when you've lots of data compared to the number of predictors, each predictor is split into plenty of categories; in this case within each predictor band the range of response is small & the average response is precisely determined. [Edit in response to comments: Sometimes there are standard cut-offs used within a field for a continuous variable: e.g. in medicine blood pressure measurements may be categorized as low, medium or high. There may be many good reasons for using such cut-offs when you present or apply a model. In particular, decision rules are often based on less information than goes into a model, & may need to be simple to apply. But it doesn't follow that these cut-offs are appropriate for binning the predictors when you fit the model. Suppose some response varies continuously with blood pressure. If you define a high blood pressure group as a predictor in your study, the effect you're estimating is the average response over the particular blood-pressures of the individuals in that group. It's not an estimate of the average response of people with high blood pressure in the general population, or of people in the high blood pressure group in another study, unless you take specific measures to make it so. If the distribution of blood pressure in the general population is known, as I imagine it is, you'll do better to calculate the average response of people with high blood pressure in the general population based on predictions from the model with blood pressure as a continuous variable. Crude binning makes your model only approximately generalizable. In general, if you have questions about the behaviour of the response between cut-offs, fit the best model you can first, & then use it to answer them.] [With regard to presentation; I think this is a red herring: (1) Ease of presentation doesn't justify bad modelling decisions. (And in the cases where binning is a good modelling decision, it doesn't need additional justification.) Surely this is self-evident. No-one ever recommends taking an important interaction out of a model because it's hard to present. (2) Whatever kind of model you fit, you can still present its results in terms of categories if you think it will aid interpretation. 
Though ... (3) You have to be careful to make sure it doesn't aid mis-interpretation, for the reasons given above. (4) It's not in fact difficult to present non-linear responses. Personal opinion, clearly, & audiences differ; but I've never seen a graph of fitted response values versus predictor values puzzle someone just because it's curved. Interactions, logits, random effects, multicollinearity, ...—these are all much harder to explain.]
MacCallum et al (2002), "On the Practice of Dichotomization of Quantitative Variables", Psychological Methods , 7 , 1, pp17–19.] | {
"source": [
"https://stats.stackexchange.com/questions/68834",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/29547/"
]
} |
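A minimal sketch of the alternatives suggested in the answer above, on a toy nonlinear relationship (simulated data): polynomial and spline terms versus cutting the predictor into bins:
library(splines)
set.seed(1)
x <- runif(200)
y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)  # toy nonlinear relationship
fit.poly   <- lm(y ~ poly(x, 3))             # cubic polynomial
fit.spline <- lm(y ~ ns(x, df = 4))          # natural cubic spline
fit.binned <- lm(y ~ cut(x, breaks = 5))     # binned predictor, for comparison
AIC(fit.poly, fit.spline, fit.binned)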
68,893 | I am a little bit confused about the Area Under Curve (AUC) of ROC and the overall accuracy. Will the AUC be proportional to the overall accuracy? In other words, when we have a larger overall accuracy will we definitely get a larger AUC? Or are they by definition positively correlated? If they are positively correlated, why do we bother reporting both of them in some publications? In a real case, I performed some classification task and got the results as follows: classifier A got an accuracy of 85% and AUC of 0.98 and classifier B got an accuracy of 93% and AUC of 0.92. The question is, which classifier is better? Is it possible to get results similar to these (or do my results indicate a bug in my implementation)? | AUC (based on ROC) and overall accuracy are not the same concept. Overall accuracy is based on one specific cutpoint, while the ROC curve tries all of the cutpoints and plots the sensitivity and specificity.
So when we compare overall accuracy, we are comparing the accuracy at some particular cutpoint, and the overall accuracy varies across different cutpoints. | {
"source": [
"https://stats.stackexchange.com/questions/68893",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/14826/"
]
} |
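A tiny simulated illustration of the point above that overall accuracy depends on the chosen cutpoint (the scores and labels are made up, not the classifiers from the question):
set.seed(1)
label <- rbinom(1000, 1, 0.5)
score <- plogis(2 * label - 1 + rnorm(1000))  # noisy scores, higher on average for label 1
sapply(c(0.3, 0.5, 0.7), function(cut) mean((score > cut) == label))  # accuracy changes with the cutpoint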
69,114 | I have been researching the meaning of positive semi-definite property of correlation or covariance matrices. I am looking for any information on Definition of positive semi-definiteness; Its important properties, practical implications; The consequence of having negative determinant, impact on multivariate analysis or simulation results etc. | The variance of a weighted sum $\sum_i a_i X_i$ of random variables must be nonnegative
for all choices of real numbers $a_i$.
Since the variance can be expressed as
$$\operatorname{var}\left(\sum_i a_i X_i\right) = \sum_i \sum_j a_ia_j \operatorname{cov}(X_i,X_j) = \sum_i \sum_j a_ia_j \Sigma_{i,j},$$
we have that the covariance matrix $\Sigma = [\Sigma_{i,j}]$ must be positive semidefinite (which is sometimes called nonnegative definite). Recall that a matrix $C$ is called
positive semidefinite if and only if $$\sum_i \sum_j a_ia_j C_{i,j} \geq 0 \;\; \forall a_i, a_j \in \mathbb R.$$ | {
"source": [
"https://stats.stackexchange.com/questions/69114",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/29872/"
]
} |
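A short numerical illustration of the answer above (my own addition): the smallest eigenvalue tells you whether a symmetric matrix is positive semidefinite, and a matrix with a negative eigenvalue cannot be a covariance matrix, because some weighted sum of the variables would then have negative "variance".
S <- cov(USArrests)                                           # a valid covariance matrix
min(eigen(S, symmetric = TRUE, only.values = TRUE)$values)    # nonnegative (up to rounding error)
B <- matrix(c(1, 0.9, 0.9, 0.9, 1, -0.9, 0.9, -0.9, 1), 3, 3) # impossible "correlation" pattern
eigen(B, symmetric = TRUE, only.values = TRUE)$values         # one eigenvalue is negative, so B is not PSD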
69,157 | I'm doing principal component analysis on my dataset and my professor told me that I should normalize the data before doing the analysis. Why? What would happen If I did PCA without normalization? Why do we normalize data in general? Could someone give clear and intuitive example which would demonstrate the consequences of not normalizing the data before analysis? | Normalization is important in PCA since it is a variance maximizing exercise. It projects your original data onto directions which maximize the variance. The first plot below shows the amount of total variance explained in the different principal components wher we have not normalized the data. As you can see, it seems like component one explains most of the variance in the data. If you look at the second picture, we have normalized the data first. Here it is clear that the other components contribute as well. The reason for this is because PCA seeks to maximize the variance of each component. And since the covariance matrix of this particular dataset is: Murder Assault UrbanPop Rape
Murder 18.970465 291.0624 4.386204 22.99141
Assault 291.062367 6945.1657 312.275102 519.26906
UrbanPop 4.386204 312.2751 209.518776 55.76808
Rape 22.991412 519.2691 55.768082 87.72916 From this structure, the PCA will select to project as much as possible in the direction of Assault since that variance is much greater. So for finding features usable for any kind of model, a PCA without normalization would perform worse than one with normalization. | {
"source": [
"https://stats.stackexchange.com/questions/69157",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/18528/"
]
} |
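The covariance matrix quoted in the answer appears to be R's built-in USArrests data, so the comparison can be reproduced with prcomp (my own sketch):
summary(prcomp(USArrests, scale. = FALSE))   # PC1 is dominated by Assault, the largest-variance variable
summary(prcomp(USArrests, scale. = TRUE))    # after scaling, the variance is spread over several components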
69,205 | I am having some issues with the derivation of the solution for ridge regression. I know the regression solution without the regularization term: $$\beta = (X^TX)^{-1}X^Ty.$$ But after adding the L2 term $\lambda\|\beta\|_2^2$ to the cost function, how come the solution becomes $$\beta = (X^TX + \lambda I)^{-1}X^Ty.$$ | It suffices to modify the loss function by adding the penalty. In matrix terms, the initial quadratic loss function becomes
$$ (Y - X\beta)^{T}(Y-X\beta) + \lambda \beta^T\beta.$$
Differentiating with respect to $\beta$ and setting the gradient $-2X^{T}(Y - X\beta) + 2\lambda\beta$ equal to zero leads to the normal equation
$$ X^{T}Y = \left(X^{T}X + \lambda I\right)\beta $$
which leads to the Ridge estimator. | {
"source": [
"https://stats.stackexchange.com/questions/69205",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/12329/"
]
} |
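A minimal R sketch of the closed-form estimators above (my own illustration; no intercept, arbitrary simulated data and an arbitrary $\lambda$):
set.seed(1)
n <- 100; p <- 5; lambda <- 2
X <- matrix(rnorm(n * p), n, p)
y <- X %*% rnorm(p) + rnorm(n)
beta.ols <- solve(t(X) %*% X) %*% t(X) %*% y                     # ordinary least squares
beta.ridge <- solve(t(X) %*% X + lambda * diag(p)) %*% t(X) %*% y  # ridge estimator
cbind(beta.ols, beta.ridge)                                      # ridge coefficients are shrunk toward zero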
69,210 | Let's say we have a Dirichlet distribution with $K$-dimensional vector parameter $\vec\alpha = [\alpha_1, \alpha_2,...,\alpha_K]$. How can I draw a sample (a $K$-dimensional vector) from this distribution? I need a (possibly) simple explanation. | First, draw $K$ independent random samples $y_1, \ldots, y_K$ from Gamma distributions each with density $$ \textrm{Gamma}(\alpha_i, 1) = \frac{y_i^{\alpha_i-1} \; e^{-y_i}}{\Gamma (\alpha_i)},$$ and then set $$x_i = \frac{y_i}{\sum_{j=1}^K y_j}. $$ Now, $x_1,...,x_K$ will follow a Dirichlet distribution The Wikipedia page on the Dirichlet distribution tells you exactly how to sample from the Dirichlet distribution. Also, in the R library MCMCpack there is a function for sampling random variables from the Dirichlet distribution. | {
"source": [
"https://stats.stackexchange.com/questions/69210",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/26658/"
]
} |
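A small R implementation of the Gamma-normalisation recipe described in the answer (my own sketch; the function name is made up):
rdirichlet.simple <- function(n, alpha) {
  k <- length(alpha)
  y <- matrix(rgamma(n * k, shape = rep(alpha, each = n)), nrow = n)  # column j holds Gamma(alpha_j, 1) draws
  y / rowSums(y)                                                      # normalise each row to sum to 1
}
set.seed(1)
samples <- rdirichlet.simple(5, c(1, 2, 3))
rowSums(samples)   # each draw sums to 1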
69,568 | For the LASSO (and other model selecting procedures) it is crucial to rescale the predictors. The general recommendation I follow is simply to use a 0 mean, 1 standard deviation normalization for continuous variables. But what is there to do with dummies? E.g. some applied examples from the same (excellent) summer school I linked to rescales continuous variables to be between 0 and 1 (not great with outliers though), probably to be comparable to the dummies. But even that does not guarantee that the coefficients should be the same order of magnitude, and thus penalized similarly, the key reason for rescaling, no? | According Tibshirani ( THE LASSO METHOD FOR VARIABLE SELECTION
IN THE COX MODEL, Statistics in Medicine, VOL. 16, 385-395 (1997) ), who literally wrote the book on regularization methods, you should standardize the dummies. However, you then lose the straightforward interpretability of your coefficients. If you don't, your variables are not on an even playing field. You are essentially tipping the scales in favor of your continuous variables (most likely). So, if your primary goal is model selection then this is an egregious error. However, if you are more interested in interpretation then perhaps this isn't the best idea. The recommendation is on page 394: The lasso method requires initial standardization of the regressors, so that the penalization scheme is fair to all regressors. For categorical regressors, one codes the regressor with dummy variables and then standardizes the dummy variables. As pointed out by a referee, however, the relative scaling between continuous and categorical variables in this scheme can be somewhat arbitrary. | {
"source": [
"https://stats.stackexchange.com/questions/69568",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6534/"
]
} |
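One way to act on the quoted recommendation in R, sketched under the assumption that the glmnet package is used for the lasso fit (my own illustration; the model and data are arbitrary):
library(glmnet)
X <- model.matrix(mpg ~ wt + hp + factor(cyl), data = mtcars)[, -1]  # dummies created for cyl
X.std <- scale(X)                                                    # centre and scale every column, dummies included
fit <- glmnet(X.std, mtcars$mpg, alpha = 1, standardize = FALSE)     # lasso on the pre-standardised matrix
Whether you really want to scale the dummies is exactly the judgement call discussed above, since the coefficients then lose their simple interpretation.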
69,633 | I have two data sets, one from a set of physical observations (temperatures), and one from an ensemble of numerical models. I'm doing a perfect-model analysis, assuming the model ensemble represents a true, independent sample, and checking to see if the observations are drawn from that distribution. The statistic I've calculated is normalised, and should theoretically be a standard normal distribution. Of course it's not perfect, so I want to test for goodness of fit. Using frequentist reasoning, I could calculate a Cramér-von Mises statistic (or Kolmogorov-Smirnov, etc.), or similar, and look up the value in a table to get a p-value, to help me decide how unlikely the value I see is, given the observations are the same as the model. What would the Bayesian equivalent of this process be? That is, how do I quantify the strength of my belief that these two distributions (my calculated statistic and the standard normal) are different? | I would suggest the book Bayesian Data Analysis as a great source for answering this question (in particular chapter 6) and everything I am about to say. But one of the usual ways that Bayesians attack this problem is by using Posterior Predictive P-values (PPPs). Before I jump into how PPPs would solve this problem let me first define the following notation: Let $y$ be the observed data and $\theta$ be the vector of parameters. We define $y^{\text{rep}}$ as the replicated data that could have been observed, or, to think predictively, as the data we would see tomorrow if the experiment that produced $y$ today were replicated with the same model and the same value of $\theta$ that produced the observed data. Note, we will define the distribution of $y^{\text{rep}}$ given the current state of knowledge with the posterior predictive distribution
$$p(y^{\text{rep}}|y)=\int_\Theta p(y^{\text{rep}}|\theta)p(\theta|y)d\theta$$ Now, we can measure the discrepancy between the model and the data by defining test quantities , the aspects of the data we wish to check. A test quantity, or discrepancy measure , $T(y,\theta)$, is a scalar summary of parameters and data that is used as a standard when comparing data to predictive simulations. Test quantities play the role in Bayesian model checking that test statistics play in classical testing. We define the notation $T(y)$ for a test statistic, which is a test quantity that depends only on data; in the Bayesian context, we can generalize test statistics to allow dependence on the model parameters under their posterior distribution. Classically, the p-value for the test statistic $T(y)$ is
$$p_C=\text{Pr}(T(y^{\text{rep}})\geq T(y)|\theta)$$
where the probability is taken over the distribution of $y^{\text{rep}}$ with $\theta$ fixed. From a Bayesian perspective, lack of fit of the data with respect to the posterior predictive distribution can be measured by the tail-area probability, or p-value, of the test quantity, and computed using posterior simulations of $(\theta,y^{\text{rep}})$. In the Bayesian approach, test quantities can be functions of the unknown parameters as well as data because the test quantity is evaluated over draws from the posterior distribution of the unknown parameters. Now, we can define the Bayesian p-value (PPPs) as the probability that the replicated data could be more extreme than the observed data, as measured by the test quantity:
$$p_B=\text{Pr}(T(y^{\text{rep}},\theta)\geq T(y,\theta)|y)$$
where the probability is taken over the posterior distribution of $\theta$ and the posterior predictive distribution of $y^{\text{rep}}$ (that is, the joint distribution, $p(\theta,y^{\text{rep}}|y)$):
$$p_B=\iint_\Theta I_{T(y^{\text{rep}},\theta)\geq T(y,\theta)}p(y^{\text{rep}}|\theta)p(\theta|y)dy^{\text{rep}}d\theta,$$
where $I$ is the indicator function. In practice though we usually compute the posterior predictive distribution using simulations. If we already have, say, $L$ simulations from the posterior distribution of $\theta$, then we can just draw one $y^{\text{rep}}$ from the predictive distribution for each simulated $\theta$; we now have $L$ draws from the joint posterior distribution, $p(y^{\text{rep}},\theta|y)$. The posterior predictive check is the comparison between the realized test quantities $T(y,\theta^l)$ and the predictive test quantities $T(y^{\text{rep}l},\theta^l)$. The estimated p-value is just the proportion of these $L$ simulations for which the test quantity equals or exceeds its realized value; that is, for which $$T(y^{\text{rep}l},\theta^l)\geq T(y,\theta^l)$$ for $l=1,...,L$. In contrast to the classical approach, Bayesian model checking does not require special methods to handle "nuisance parameters." By using posterior simulations, we implicitly average over all the parameters in the model. An additional source, Andrew Gelman also has a very nice paper on PPP's here: http://www.stat.columbia.edu/~gelman/research/unpublished/ppc_understand2.pdf | {
"source": [
"https://stats.stackexchange.com/questions/69633",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9007/"
]
} |
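A minimal simulation sketch of the posterior predictive p-value described above (my own illustration), for a toy normal model with known unit variance and a flat prior on the mean, using $T(y)=\max(y)$ as the test quantity:
set.seed(1)
y <- rnorm(30, mean = 2)                                            # observed data
L <- 5000
theta <- rnorm(L, mean(y), 1 / sqrt(length(y)))                     # posterior draws of the mean
T.rep <- sapply(theta, function(th) max(rnorm(length(y), th, 1)))   # T(y_rep) for each replicated data set
mean(T.rep >= max(y))                                               # estimated posterior predictive p-value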
69,898 | I have a data set with tens of thousands of observations of medical cost data. This data is highly skewed to the right and has a lot of zeros. It looks like this for two sets of people (in this case two age bands with > 3000 obs each): Min. 1st Qu. Median Mean 3rd Qu. Max.
0.0 0.0 0.0 4536.0 302.6 395300.0
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.0 0.0 0.0 4964.0 423.8 721700.0 If I perform Welch's t-test on this data I get a result back: Welch Two Sample t-test
data: x and y
t = -0.4777, df = 3366.488, p-value = 0.6329
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-2185.896 1329.358
sample estimates:
mean of x mean of y
4536.186 4964.455 I know its not correct to use a t-test on this data since its so badly non-normal. However, if I use a permutation test for the difference of the means, I get nearly the same p-value all the time (and it gets closer with more iterations). Using perm package in R and permTS with exact Monte Carlo Exact Permutation Test Estimated by Monte Carlo
data: x and y
p-value = 0.6188
alternative hypothesis: true mean x - mean y is not equal to 0
sample estimates:
mean x - mean y
-428.2691
p-value estimated from 500 Monte Carlo replications
99 percent confidence interval on p-value:
0.5117552 0.7277040 Why is the permutation test statistic coming out so close to the t.test value? If I take logs of the data then I get a t.test p-value of 0.28 and the same from the permutation test. I thought the t-test values wold be more garbage than what I am getting here. This is true of many other data sets I have like this and am wondering why the t-test appears to be working when it shouldn't. My concern here is that the individual costs are not i.i.d. There are many sub-groups of people with very different cost distributions (women vs men, chronic conditions etc) that seem to violate the iid requirement for central limit theorem, or should I not worry about that? | Neither the t-test nor the permutation test have much power to identify a difference in means between two such extraordinarily skewed distributions. Thus they both give anodyne p-values indicating no significance at all. The issue is not that they seem to agree; it is that because they have a hard time detecting any difference at all, they simply cannot disagree! For some intuition, consider what would happen if a change in a single value occurred in one dataset. Suppose that the maximum of 721,700 had not occurred in the second data set, for instance. The mean would have dropped by approximately 721700/3000, which is about 240. Yet the difference in the means is only 4964-4536 = 438, not even twice as big. That suggests (although it does not prove) that any comparison of the means would not find the difference significant. We can verify, though, that the t-test is not applicable. Let's generate some datasets with the same statistical characteristics as these. To do so I have created mixtures in which $5/8$ of the data are zeros in any case. The remaining data have a lognormal distribution. The parameters of that distribution are arranged to reproduce the observed means and third quartiles. It turns out in these simulations that the maximum values are not far from the reported maxima, either. Let's replicate the first dataset 10,000 times and track its mean. (The results will be almost the same when we do this for the second dataset.) The histogram of these means estimates the sampling distribution of the mean. The t-test is valid when this distribution is approximately Normal; the extent to which it deviates from Normality indicates the extent to which the Student t distribution will err. So, for reference, I have also drawn (in red) the PDF of the Normal distribution fit to these results. We can't see much detail because there are some whopping big outliers. (That's a manifestation of this sensitivity of the means I mentioned.) There are 123 of them--1.23%--above 10,000. Let's focus on the rest so we can see the detail and because these outliers may result from the assumed lognormality of the distribution, which is not necessarily the case for the original dataset. That is still strongly skewed and deviates visibly from the Normal approximation, providing sufficient explanation for the phenomena recounted in the question. It also gives us a sense of how large a difference in means could be detected by a test: it would have to be around 3000 or more to appear significant. Conversely, the actual difference of 428 might be detected provided you had approximately $(3000/428)^2 = 50$ times as much data (in each group). Given 50 times as much data, I estimate the power to detect this difference at a significance level of 5% would be around 0.4 (which is not good, but at least you would have a chance). 
Here is the R code that produced these figures. #
# Generate positive random values with a median of 0, given Q3,
# and given mean. Make a proportion 1-e of them true zeros.
#
rskew <- function(n, x.mean, x.q3, e=3/8) {
beta <- qnorm(1 - (1/4)/e)
gamma <- 2*(log(x.q3) - log(x.mean/e))
sigma <- sqrt(beta^2 - gamma) + beta
mu <- log(x.mean/e) - sigma^2/2
m <- floor(n * e)
c(exp(rnorm(m, mu, sigma)), rep(0, n-m))
}
#
# See how closely the summary statistics are reproduced.
# (The quartiles will be close; the maxima not too far off;
# the means may differ a lot, though.)
#
set.seed(23)
x <- rskew(3300, 4536, 302.6)
y <- rskew(3400, 4964, 423.8)
summary(x)
summary(y)
#
# Estimate the sampling distribution of the mean.
#
set.seed(17)
sim.x <- replicate(10^4, mean(rskew(3367, 4536, 302.6)))
hist(sim.x, freq=FALSE, ylim=c(0, dnorm(0, sd=sd(sim.x))))
curve(dnorm(x, mean(sim.x), sd(sim.x)), add=TRUE, col="Red")
hist(sim.x[sim.x < 10000], xlab="x", freq=FALSE)
curve(dnorm(x, mean(sim.x), sd(sim.x)), add=TRUE, col="Red")
#
# Can a t-test detect a difference with more data?
#
set.seed(23)
n.factor <- 50
z <- replicate(10^3, {
x <- rskew(3300*n.factor, 4536, 302.6)
y <- rskew(3400*n.factor, 4964, 423.8)
t.test(x,y)$p.value
})
hist(z)
mean(z < .05) # The estimated power at a 5% significance level | {
"source": [
"https://stats.stackexchange.com/questions/69898",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/21827/"
]
} |
69,907 | I'm using SPSS to try and find a mixed model that adequate explains the data that I have.
Two of the explanatory variables are closely related ('Sample group' and 'individual'), as an individual is only
ever part of one sample group, so I've been nesting them if they are in the same model. I've been using the models AIC score to rank the models in order of explanatory power.
Some of the models use the nested variables, and some of the models only use either 'Sample group' or 'individual'. My question is:
Is it valid to use the AIC to compare between models that use nested variables and those that don't? To clarify by nested variables, I mean that some of the potential variables used in a model are:
1) sample site(individual)
2) sample site
3) individual | Neither the t-test nor the permutation test have much power to identify a difference in means between two such extraordinarily skewed distributions. Thus they both give anodyne p-values indicating no significance at all. The issue is not that they seem to agree; it is that because they have a hard time detecting any difference at all, they simply cannot disagree! For some intuition, consider what would happen if a change in a single value occurred in one dataset. Suppose that the maximum of 721,700 had not occurred in the second data set, for instance. The mean would have dropped by approximately 721700/3000, which is about 240. Yet the difference in the means is only 4964-4536 = 438, not even twice as big. That suggests (although it does not prove) that any comparison of the means would not find the difference significant. We can verify, though, that the t-test is not applicable. Let's generate some datasets with the same statistical characteristics as these. To do so I have created mixtures in which $5/8$ of the data are zeros in any case. The remaining data have a lognormal distribution. The parameters of that distribution are arranged to reproduce the observed means and third quartiles. It turns out in these simulations that the maximum values are not far from the reported maxima, either. Let's replicate the first dataset 10,000 times and track its mean. (The results will be almost the same when we do this for the second dataset.) The histogram of these means estimates the sampling distribution of the mean. The t-test is valid when this distribution is approximately Normal; the extent to which it deviates from Normality indicates the extent to which the Student t distribution will err. So, for reference, I have also drawn (in red) the PDF of the Normal distribution fit to these results. We can't see much detail because there are some whopping big outliers. (That's a manifestation of this sensitivity of the means I mentioned.) There are 123 of them--1.23%--above 10,000. Let's focus on the rest so we can see the detail and because these outliers may result from the assumed lognormality of the distribution, which is not necessarily the case for the original dataset. That is still strongly skewed and deviates visibly from the Normal approximation, providing sufficient explanation for the phenomena recounted in the question. It also gives us a sense of how large a difference in means could be detected by a test: it would have to be around 3000 or more to appear significant. Conversely, the actual difference of 428 might be detected provided you had approximately $(3000/428)^2 = 50$ times as much data (in each group). Given 50 times as much data, I estimate the power to detect this difference at a significance level of 5% would be around 0.4 (which is not good, but at least you would have a chance). Here is the R code that produced these figures. #
# Generate positive random values with a median of 0, given Q3,
# and given mean. Make a proportion 1-e of them true zeros.
#
rskew <- function(n, x.mean, x.q3, e=3/8) {
beta <- qnorm(1 - (1/4)/e)
gamma <- 2*(log(x.q3) - log(x.mean/e))
sigma <- sqrt(beta^2 - gamma) + beta
mu <- log(x.mean/e) - sigma^2/2
m <- floor(n * e)
c(exp(rnorm(m, mu, sigma)), rep(0, n-m))
}
#
# See how closely the summary statistics are reproduced.
# (The quartiles will be close; the maxima not too far off;
# the means may differ a lot, though.)
#
set.seed(23)
x <- rskew(3300, 4536, 302.6)
y <- rskew(3400, 4964, 423.8)
summary(x)
summary(y)
#
# Estimate the sampling distribution of the mean.
#
set.seed(17)
sim.x <- replicate(10^4, mean(rskew(3367, 4536, 302.6)))
hist(sim.x, freq=FALSE, ylim=c(0, dnorm(0, sd=sd(sim.x))))
curve(dnorm(x, mean(sim.x), sd(sim.x)), add=TRUE, col="Red")
hist(sim.x[sim.x < 10000], xlab="x", freq=FALSE)
curve(dnorm(x, mean(sim.x), sd(sim.x)), add=TRUE, col="Red")
#
# Can a t-test detect a difference with more data?
#
set.seed(23)
n.factor <- 50
z <- replicate(10^3, {
x <- rskew(3300*n.factor, 4536, 302.6)
y <- rskew(3400*n.factor, 4964, 423.8)
t.test(x,y)$p.value
})
hist(z)
mean(z < .05) # The estimated power at a 5% significance level | {
"source": [
"https://stats.stackexchange.com/questions/69907",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/30239/"
]
} |
69,920 | I discuss estimation of the following $ARX(1)$ model: $Y_t=α+βX_t+u_t$ where $u_t=ρu_{t-1}+ϵ_t$ Substituting the the value of $u_t$ in the first equation into the second, we have that: $Y_t-α- βX_t =ρ(Y_{t-1}-α-βX_{t-1})+ϵ_t$ Rearranging, we have: $Y_t=α(1-ρ)+ρY_{t-1}+ βX_t-ρβX_{t-1}+ϵ_t$ Can we obtain estimates for the parameters in equation 2 using OLS? Well, yes. However, notice that if we did this, we’d be ignoring the fact that the parameters in the model are restricted, in particular the parameter of $X_{t-1}$, $- ρβ$, is the product of the coefficients on $Y_{t-1}$ and $X_{t}$. We should utilise this information. The OLS estimator will be unbiased, consistent, and BLUE (best linear unbiased estimator), but by utilising the information regarding the restriction of parameters, we can find a better estimator than OLS. In other words, the non-linear estimator produced by the Marquardt algorithm will be superior to OLS. Unsurprisingly, EViews estimates all ARMAX models using the Marquardt algorithm. Consider the following 3 situations: $1.$ If $ρ=0$, then our model becomes: $Y_t=α+ βX_t+ϵ_t$ The best thing to do in this case is to ignore $Y_{t-1}$ and $X_{t-1}$, since we know, in theory, they are irrelevant in explaining $Y_{t}$. The Marquardt algorithm is not necessary. $2.$ If $β=0$ and $ρ ≠1$, then our model becom$Y_t=α(1-ρ)+ρY_{t-1}+ϵ_t$ The best thing to do in this case is to ignore $X_t$ and $X_{t-1}$ since we know they are irrelevant variables. $3.$ If, in the population, $X_{t-1}$ is uncorrelated with the other regressors, in particular $X_{t}$, then we can just regress $Y_t$ on a constant, $Y_{t-1}$, $X_{t}$. Again, this is extremely unlikely. Especially so because we are dealing with time series variables that are in all but the most rare circumstances autocorrelated. If, in the sample, $X_{t-1}$ is uncorrelated with the other regressors, a very unlikely scenario, then the OLS estimates of the $α(1-ρ)$, $ρ$ and $β$ when we ignore $X_{t-1}$ are exactly the same as from the Marquardt algorithm, by the Frisch-Waugh-Lovell theorem, as long as the estimate of $ρ$ is not precisely zero (which is, in any case, unlikely). We can obtain the estimate of $α$ from the Marquardt algorithm by simply multiplying the OLS estimate of the constant by 1 minus the OLS estimate of $ρ$. This is not so much a question. I would like your expertise regarding whether you think my assertions are correct. It is very difficult to find answers to this online. It is also quite an interesting problem. Your help is greatly appreciated. Thank you. Christian | Neither the t-test nor the permutation test have much power to identify a difference in means between two such extraordinarily skewed distributions. Thus they both give anodyne p-values indicating no significance at all. The issue is not that they seem to agree; it is that because they have a hard time detecting any difference at all, they simply cannot disagree! For some intuition, consider what would happen if a change in a single value occurred in one dataset. Suppose that the maximum of 721,700 had not occurred in the second data set, for instance. The mean would have dropped by approximately 721700/3000, which is about 240. Yet the difference in the means is only 4964-4536 = 438, not even twice as big. That suggests (although it does not prove) that any comparison of the means would not find the difference significant. We can verify, though, that the t-test is not applicable. 
Let's generate some datasets with the same statistical characteristics as these. To do so I have created mixtures in which $5/8$ of the data are zeros in any case. The remaining data have a lognormal distribution. The parameters of that distribution are arranged to reproduce the observed means and third quartiles. It turns out in these simulations that the maximum values are not far from the reported maxima, either. Let's replicate the first dataset 10,000 times and track its mean. (The results will be almost the same when we do this for the second dataset.) The histogram of these means estimates the sampling distribution of the mean. The t-test is valid when this distribution is approximately Normal; the extent to which it deviates from Normality indicates the extent to which the Student t distribution will err. So, for reference, I have also drawn (in red) the PDF of the Normal distribution fit to these results. We can't see much detail because there are some whopping big outliers. (That's a manifestation of this sensitivity of the means I mentioned.) There are 123 of them--1.23%--above 10,000. Let's focus on the rest so we can see the detail and because these outliers may result from the assumed lognormality of the distribution, which is not necessarily the case for the original dataset. That is still strongly skewed and deviates visibly from the Normal approximation, providing sufficient explanation for the phenomena recounted in the question. It also gives us a sense of how large a difference in means could be detected by a test: it would have to be around 3000 or more to appear significant. Conversely, the actual difference of 428 might be detected provided you had approximately $(3000/428)^2 = 50$ times as much data (in each group). Given 50 times as much data, I estimate the power to detect this difference at a significance level of 5% would be around 0.4 (which is not good, but at least you would have a chance). Here is the R code that produced these figures. #
# Generate positive random values with a median of 0, given Q3,
# and given mean. Make a proportion 1-e of them true zeros.
#
rskew <- function(n, x.mean, x.q3, e=3/8) {
beta <- qnorm(1 - (1/4)/e)
gamma <- 2*(log(x.q3) - log(x.mean/e))
sigma <- sqrt(beta^2 - gamma) + beta
mu <- log(x.mean/e) - sigma^2/2
m <- floor(n * e)
c(exp(rnorm(m, mu, sigma)), rep(0, n-m))
}
#
# See how closely the summary statistics are reproduced.
# (The quartiles will be close; the maxima not too far off;
# the means may differ a lot, though.)
#
set.seed(23)
x <- rskew(3300, 4536, 302.6)
y <- rskew(3400, 4964, 423.8)
summary(x)
summary(y)
#
# Estimate the sampling distribution of the mean.
#
set.seed(17)
sim.x <- replicate(10^4, mean(rskew(3367, 4536, 302.6)))
hist(sim.x, freq=FALSE, ylim=c(0, dnorm(0, sd=sd(sim.x))))
curve(dnorm(x, mean(sim.x), sd(sim.x)), add=TRUE, col="Red")
hist(sim.x[sim.x < 10000], xlab="x", freq=FALSE)
curve(dnorm(x, mean(sim.x), sd(sim.x)), add=TRUE, col="Red")
#
# Can a t-test detect a difference with more data?
#
set.seed(23)
n.factor <- 50
z <- replicate(10^3, {
x <- rskew(3300*n.factor, 4536, 302.6)
y <- rskew(3400*n.factor, 4964, 423.8)
t.test(x,y)$p.value
})
hist(z)
mean(z < .05) # The estimated power at a 5% significance level | {
"source": [
"https://stats.stackexchange.com/questions/69920",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/30192/"
]
} |
70,490 | My question concerns trying to justify a widely-used method, namely taking the expected value of Taylor Series. Assume we have a random variable $X$ with positive mean $\mu$ and variance $\sigma^2$. Additionally, we have a function, say, $\log(x)$. Doing Taylor Expansion of $\log X$ around the mean, we get
$$
\log X = \log\mu + \frac{X - \mu}{\mu} - \frac12 \frac{(X-\mu)^2}{\mu^2} + \frac13 \frac{(X - \mu)^3}{\xi_X^3},
$$
where, as usual, $\xi_X$ is s.t. $|\xi_X - \mu| < |X - \mu|$. If we take an expectation, we will get an approximate equation which people usually refer to as something self-apparent (see the $\approx$ sign in the first equation here) :
$$
\mathbb{E}\log X \approx \log \mu - \frac12 \frac{\sigma^2}{\mu^2}
$$ QUESTION : I'm interested in how to prove that the expected value of the remainder term is actually negligible, i.e.
$$
\mathbb{E}\left[\frac{(X - \mu)^3}{\xi_X^3}\right] = o(\sigma^2)
$$
(or, in other words, $\mathbb{E}\bigl[o(X-\mu)^2\bigr] = o\bigl(\mathbb{E}\bigl[(X-\mu)^2\bigr]\bigr)$). What I tried to do : assuming that $\sigma^2 \to 0$ (which, in turn, means $X \to \mu$ in $\mathbb{P}$), I tried to split the integral into two, surrounding $\mu$ with some $\varepsilon$-vicinity $N_\varepsilon$:
$$
\int_\mathbb{R} p(x)\frac{(x-\mu)^3}{\xi_x^3} \,dx = \int_{x \in N_\varepsilon} \ldots dx + \int_{x \notin N_\varepsilon} \ldots dx
$$ The first one can be bounded due to the fact that $0 \notin N_\varepsilon$ and thus $1/\xi^3$ doesn't bother. But with the second one we have two concurring facts: on the one hand
$$
\mathbb{P}(|X - \mu| > \varepsilon) \to 0
$$
(as $\sigma^2 \to 0$). But on the other hand, we don't know what to do with $1/\xi^3$. Another possibility could be to try using the Fatou's lemma, but I can't figure out how. Will appreciate any help or hint. I realize that this is sort of a very technical question, but I need to go through it in order to trust this "Taylor-expectation" method. Thanks! P.S. I checked out here , but seems it's a bit of another stuff. | You are right to be skeptical of this approach. The Taylor series method does not work in general, although the heuristic contains a kernel of truth. To summarize the technical discussion below, Strong concentration implies that the Taylor series method works for nice functions Things can and will go dramatically wrong for heavy-tailed distributions or not-so-nice functions As Alecos's answer indicates, this suggests that the Taylor-series method should be scrapped if your data might have heavy tails. (Finance professionals, I'm looking at you.) As Elvis noted, key problem is that the variance does not control higher moments . To see why, let's simplify your question as much as possible to get to the main idea. Suppose we have a sequence of random variables $X_n$ with $\sigma(X_n)\to 0$ as $n\to \infty$. Q: Can we guarantee that $\mathbb{E}[|X_n-\mu|^3] = o(\sigma^2(X_n))$ as $n\to \infty?$ Since there are random variables with finite second moments and infinite third moments, the answer is emphatically no . Therefore, in general, the Taylor series method fails even for 3rd degree polynomials . Iterating this argument shows you cannot expect the Taylor series method to provide accurate results, even for polynomials, unless all moments of your random variable are well controlled. What, then, are we to do? Certainly the method works for bounded random variables whose support converges to a point, but this class is far too small to be interesting. Suppose instead that the sequence $X_n$ comes from some highly concentrated family that satisfies (say) $$\mathbb{P}\left\{ |X_n-\mu|> t\right\} \le \mathrm{e}^{- C n t^2} \tag{1}$$ for every $t>0$ and some $C>0$. Such random variables are surprisingly common. For example when $X_n$ is the empirical mean $$ X_n := \frac{1}{n} \sum_{i=1}^n Y_i$$ of nice random variables $Y_i$ (e.g., iid and bounded), various concentration inequalities imply that $X_n$ satisfies (1). A standard argument (see p. 10 here ) bounds the $p$th moments for such random variables: $$ \mathbb{E}[|X_n-\mu|^p] \le \left(\frac{p}{2 C n}\right)^{p/2}.$$ Therefore, for any "sufficiently nice" analytic function $f$ (see below), we can bound the error $\mathcal{E}_m$ on the $m$-term Taylor series approximation using the triangle inequality $$ \mathcal{E}_m:=\left|\mathbb{E}[f(X_n)] - \sum_{p=0}^m \frac{f^{(p)}(\mu)}{p!} \mathbb{E}(X_n-\mu)^p\right|\le \tfrac{1}{(2 C n)^{(m+1)/2}} \sum_{p=m+1}^\infty |f^{(p)}(\mu)| \frac{p^{p/2}}{p!}$$ when $n>C/2$. Since Stirling's approximation gives $p! \approx p^{p-1/2}$, the error of the truncated Taylor series satisfies $$ \mathcal{E}_m = O(n^{-(m+1)/2}) \text{ as } n\to \infty\quad \text{whenever} \quad \sum_{p=0}^\infty p^{(1-p)/2 }|f^{(p)}(\mu)| < \infty \tag{2}.$$ Hence, when $X_n$ is strongly concentrated and $f$ is sufficiently nice, the Taylor series approximation is indeed accurate. The inequality appearing in (2) implies that $f^{(p)}(\mu)/p! = O(p^{-p/2})$, so that in particular our condition requires that $f$ is entire . This makes sense because (1) does not impose any boundedness assumptions on $X_n$. 
Let's see what can go wrong when $f$ has a singularity (following whuber's comment). Suppose that we choose $f(x)=1/x$. If we take $X_n$ from the $\mathrm{Normal}(1,1/n)$ distribution truncated between zero and two, then $X_n$ is sufficiently concentrated but $\mathbb{E}[f(X_n)] = \infty$ for every $n$. In other words, we have a highly concentrated, bounded random variable, and still the Taylor series method fails when the function has just one singularity. A few words on rigor. I find it nicer to present the condition appearing in (2) as derived rather than a deus ex machina that's required in a rigorous theorem/proof format. In order to make the argument completely rigorous, first note that the right-hand side in (2) implies that $$\mathbb{E}[|f(X_n)|] \le \sum_{p=0}^\infty \frac{|f^{(p)}(\mu)|}{p!} \mathbb{E}[|X_n-\mu|^p]< \infty$$ by the growth rate of subgaussian moments from above. Thus, Fubini's theorem provides $$ \mathbb{E}[f(X_n)] = \sum_{p=0}^\infty \frac{f^{(p)}(\mu)}{p!} \mathbb{E}[(X_n-\mu)^p]$$ The rest of the proof proceeds as above.
"source": [
"https://stats.stackexchange.com/questions/70490",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/30473/"
]
} |
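A simulation sketch of the practical message above (my own addition): the approximation $\mathbb{E}\log X \approx \log\mu - \sigma^2/(2\mu^2)$ is close for a concentrated positive variable but fails badly for a heavy-tailed one. Plug-in estimates of $\mu$ and $\sigma^2$ are used.
set.seed(1)
approx.err <- function(x) {
  mu <- mean(x); s2 <- var(x)
  mean(log(x)) - (log(mu) - s2 / (2 * mu^2))   # simulated E[log X] minus the Taylor approximation
}
x.nice <- rgamma(1e6, shape = 400, rate = 4)      # mean 100, small relative spread
x.heavy <- rlnorm(1e6, meanlog = 2, sdlog = 1.5)  # heavy right tail
approx.err(x.nice)    # close to zero
approx.err(x.heavy)   # large error: the higher moments are not negligible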
70,545 | I never had the opportunity to visit a stats course from a math faculty. I am looking for a probability theory and statistics book that is complete and self-sufficient. By complete I mean that it contains all the proofs and not just states results. By self-sufficient I mean that I am not required to read another book to be able to understand the book. Of course it can require college level (math student) calculus and linear algebra. I have looked at multiple books and I didn't like any of them. DeGroot & Schervish (2011) Probability and Statistics (4th Edition) Pearson This is not complete enough. It just states a lot of stuff without the derivation. Besides that I like it. Wasserman (2004) All of Statistics: A Concise Course in Statistical Inference Springer. Didn't like it at all. Almost no explanations. "Weighing the Odds" from David Williams is more formal than DeGroot and seems to be complete and self-sufficient. However, I find the style strange. He also invents new terms that only he seems to use. All the stuff that is explained in DeGroot too is explained better there. If you know a great book in German that's also fine as I am German. | If you are searching for proofs, I have been working for some time on a free stats textbook that collects lots of proofs of elementary and less elementary facts that are difficult to find in probability and statistics books (because they are scattered here and there). You can have a look at it at http://www.statlect.com/ | {
"source": [
"https://stats.stackexchange.com/questions/70545",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/30495/"
]
} |
70,553 | I have a question in which it asks to verify whether if the Uniform distribution (${\rm Uniform}(a,b)$) is normalized. For one, what does it mean for any distribution to be normalized? And two, how do we go about verifying whether a distribution is normalized or not? I understand by computing
$$
\frac{X-\text{mean}}{\text{sd}}
$$
we get normalized data , but here it's asking to verify whether a distribution is normalized or not. | Unfortunately, terms are used differently in different fields, by different people within the same field, etc., so I'm not sure how well this can be answered for you here. You should make sure you know the definition that your instructor / the textbook is using for "normalized". However, here are some common definitions: Centered: $$
X-{\rm mean}
$$ Standardized: $$
\frac{X-\text{mean}}{\text{sd}}
$$ Normalized: $$
\frac{X-\min(X)}{\max(X)-\min(X)}
$$ Normalizing in this sense rescales your data to the unit interval. Standardizing turns your data into $z$-scores, as @Jeff notes. And centering just makes the mean of your data equal to $0$. It is worth recognizing here that all three of these are linear transformations ; as such, they do not change the shape of your distribution . That is, sometimes people call the $z$-score transformation "normalizing" and believe, because of $z$-scores' association with the normal distribution, that this has made their data normally distributed. This is not so (as @Jeff also notes, and as you could tell by plotting your data before and after). Should you be interested, you could change the shape of your data using the Box-Cox family of transformations , for example. With respect to how you could verify these transformations, it depends on what exactly is meant by that. If they mean simply to check that the code ran properly, you could check means, SDs, minimums, and maximums. | {
"source": [
"https://stats.stackexchange.com/questions/70553",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/30499/"
]
} |
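The three transformations side by side in R (my own sketch):
x <- c(2, 5, 9, 1, 7)
centered <- x - mean(x)
standardized <- (x - mean(x)) / sd(x)                # same as scale(x)
normalized <- (x - min(x)) / (max(x) - min(x))       # rescaled to the unit interval
rbind(centered, standardized, normalized)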
70,556 | I have scoured lots of help sites and am still confused about how to specify more complicated nested terms in a mixed model as well. I am also confused as the use of : and / and | in specifying interactions and nesting with random factors using lmer() in the lme4 package in R . For the purpose of this question, let's assume I have accurately portrayed my data with this standard statistical model:
$$
Y_{ijk} = u + \text{station}_i + \text{tow}_{j(i)} + \text{day}_k + (\text{station}\times \text{day})_{ik} + (\text{tow}\times\text{day})_{j(i)k}
$$ station is fixed, tow and day are random. Tow is (implicitly) nested within station . In other words, I'm hoping that my model includes Station(i,fixed), Tow(j,random,implicitly nested within Station), Day(k,random), and interaction between Tow and Day, and the interaction between Day and Station. I have consulted with a statistician to create my model and at this time believe it to be representative of my data, but will also add in a description of my data for those who are interested at the bottom of my post so as not to clutter. So far what I've been able to piece together is the following in lmer : lmer(y ~ station + (1|station:tow) + (1|Day) + (1|station:day) + (1|tow:day),
data=my.data) Does this accurately depict my statistical model? Any suggestions for how to improve my code if it does not read correctly? I've bolded the specific terms I'm having difficulty specifying in my lmer formula #1. tow nested within station when tow is random and station is fixed I'm confused, however about differentiating between nested and interaction terms that are random using : and / . In my above example, I have (1|station:tow) in which I'm hoping reads tow nested within station. I've read conflicting comments on various sites whether or not I should be using : or / here within the random (1|...) format of lmer . #2. The interaction between station and day when station is fixed and day is random I then have (1|station:day) but this time I'm hoping it reads the interaction between station and day. It seems like I could use station*day to account for the individual effects of station and day as well as their interaction (rather than including each of the three terms separately as I do above), but I don't see how to specify this when one is fixed and the other is random. Would station*(1|day) do that? #3. The interaction between tow and day (both random) when tow is nested in station (fixed) Then lastly, I have (1|tow:day) which I'm hoping reads the interaction of tow and day , but I'm wondering if I need to specify again that tow is nested (implicitly) in station? I am new to both R and lmer and statistical modeling and greatly appreciate the trouble of thorough explanations in any responses to my questions if possible.$$$$ More details on my data: I am asking whether concentrations of plankton vary across a physical front in the nearshore ocean. I have three stations, inshore, within, and offshore of this front. Station is thus fixed. At each station, I take three replicate plankton tows (from which I sort, count, and get a concentration in terms of # of bugs per meter cubed of water). Tow is random: in three tows I hope to account for the general variability in plankton at that particular station. Tow is intrinsically nested in station as each tow does not have a unique ID (123,123,123 is the ID for tows at each station). I then did this on multiple, independent days with a new front that had formed. I think I can think of Day as a blocking factor? Day is random as repeating this on multiple independent front days is attempting to capture variability from day to day and be representative of all days where this front is present. I want to know about the interaction terms to see if Tows change in variability from day to day and if stations always yield similar data or does it depend on the day? Again, thank you for your time and help, I appreciate it! | Tow nested within station when tow is random and station is fixed station+(1|station:tow) is correct. As @John said in his answer, (1|station/tow) would expand to (1|station)+(1|station:tow) (main effect of station plus interaction between tow and station), which you don't want because you have already specified station as a fixed effect. Interaction between station and day when station is fixed and day is random. The interaction between a fixed and a random effect is always random. Again as @John said, station*day expands to station+day+station:day , which you (again) don't want because you've already specified day in your model. 
I don't think there is a way to do what you want and collapse the crossed effects of day (random) and station (fixed), but you could if you wanted write station+(1|day/station) , which as specified in the previous answer would expand to station + (1|day) + (1|day:station) . interaction between tow and day when tow is nested in station Because you do not have unique values of the tow variable (i.e. because as you say below tows are specified as 1 , 2 , 3 at every station, you do need to specify the nesting, as (1|station:tow:day) . If you did have the tows specified uniquely, you could use either (1|tow:day) or (1|station:tow:day) (they should give equivalent answers). If you do not specify the nesting in this case, lme4 will try to estimate a random effect that is shared by tow #1 at all stations ... One way to diagnose whether you've specified the random effects correctly is to look at the number of observations reported for each grouping variable and see whether it agrees with what you expect (for example, the station:tow:day group should have a number of observations corresponding to the total number of station $\times$ tow $\times$ day combinations: if you forgot the nesting with station, you should see that you get fewer observations than you ought. Are http://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#model-specification and http://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#nested-or-crossed useful to you? | {
"source": [
"https://stats.stackexchange.com/questions/70556",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/30501/"
]
} |
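Putting the pieces of the answer together into one call (my own sketch; the variable and data-frame names are taken from the question and are assumed, not tested):
library(lme4)
fit <- lmer(y ~ station + (1 | station:tow) + (1 | day) + (1 | day:station) + (1 | station:tow:day), data = my.data)
summary(fit)   # check that the number of groups per random term matches the design, as suggested above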
70,558 | What diagnostic plots (and perhaps formal tests) do you find most informative for regressions where the outcome is a count variable? I'm especially interested in Poisson and negative binomial models, as well as zero-inflated and hurdle counterparts of each. Most of the sources I've found simply plot the residuals vs. fitted values without discussion of what these plots "should" look like. Wisdom and references greatly appreciated. The back story on why I'm asking this, if it's relevant, is my other question . Related discussions: Interpreting residual diagnostic plots for glm models? Assumptions of generalized linear models GLMs - Diagnostics and Which Family | Here is what I usually like doing (for illustration I use the overdispersed and not very easily modelled quine data of pupil's days absent from school from MASS ): Test and graph the original count data by plotting observed frequencies and fitted frequencies (see chapter 2 in Friendly ) which is supported by the vcd package in R in large parts. For example, with goodfit and a rootogram : library(MASS)
library(vcd)
data(quine)
fit <- goodfit(quine$Days)
summary(fit)
rootogram(fit) or with Ord plots which help in identifying which count data model is underlying (e.g., here the slope is positive and the intercept is positive which speaks for a negative binomial distribution): Ord_plot(quine$Days) or with the "XXXXXXness" plots where XXXXX is the distribution of choice, say Poissoness plot (which speaks against Poisson, try also type="nbinom" ): distplot(quine$Days, type="poisson") Inspect usual goodness-of-fit measures (such as likelihood ratio statistics vs. a null model or similar): mod1 <- glm(Days~Age+Sex, data=quine, family="poisson")
summary(mod1)
anova(mod1, test="Chisq") Check for over / underdispersion by looking at residual deviance/df or at a formal test statistic (e.g., see this answer ). Here we have clearly overdispersion: library(AER)
deviance(mod1)/mod1$df.residual
dispersiontest(mod1) Check for influential and leverage points , e.g., with the influencePlot in the car package. Of course here many points are highly influential because Poisson is a bad model: library(car)
influencePlot(mod1) Check for zero inflation by fitting a count data model and its zeroinflated / hurdle counterpart and compare them (usually with AIC). Here a zero inflated model would fit better than the simple Poisson (again probably due to overdispersion): library(pscl)
mod2 <- zeroinfl(Days~Age+Sex, data=quine, dist="poisson")
AIC(mod1, mod2) Plot the residuals (raw, deviance or scaled) on the y-axis vs. the (log) predicted values (or the linear predictor) on the x-axis. Here we see some very large residuals and a substantial deviance of the deviance residuals from the normal (speaking against the Poisson; Edit: @FlorianHartig's answer suggests that normality of these residuals is not to be expected so this is not a conclusive clue): res <- residuals(mod1, type="deviance")
plot(log(predict(mod1)), res)
abline(h=0, lty=2)
qqnorm(res)
qqline(res) If interested, plot a half normal probability plot of residuals by plotting ordered absolute residuals vs. expected normal values Atkinson (1981) . A special feature would be to simulate a reference ‘line’ and envelope with simulated / bootstrapped confidence intervals (not shown though): library(faraway)
halfnorm(residuals(mod1)) Diagnostic plots for log linear models for count data (see chapters 7.2 and 7.7 in Friendly's book). Plot predicted vs. observed values perhaps with some interval estimate (I did just for the age groups--here we see again that we are pretty far off with our estimates due to the overdispersion apart, perhaps, in group F3. The pink points are the point prediction $\pm$ one standard error): plot(Days~Age, data=quine)
prs <- predict(mod1, type="response", se.fit=TRUE)
pris <- data.frame("pest"=prs[[1]], "lwr"=prs[[1]]-prs[[2]], "upr"=prs[[1]]+prs[[2]])
points(pris$pest ~ quine$Age, col="red")
points(pris$lwr ~ quine$Age, col="pink", pch=19)
points(pris$upr ~ quine$Age, col="pink", pch=19) This should give you much of the useful information about your analysis and most steps work for all standard count data distributions (e.g., Poisson, Negative Binomial, COM Poisson, Power Laws). | {
"source": [
"https://stats.stackexchange.com/questions/70558",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/11511/"
]
} |
70,679 | I'm trying to interpret variance inflation factors using the vif function in the R package car . The function prints both a generalised $\text{VIF}$ and also $\text{GVIF}^{1/(2\cdot\text{df})}$. According to the help file , this latter value To adjust for the dimension of the confidence ellipsoid, the function
also prints GVIF^[1/(2*df)] where df is the degrees of freedom
associated with the term. I don't understand the meaning of this explanation in the help file, so I'm not sure if I should be using $\text{GVIF}$ or $\text{GVIF}^{1/(2\cdot\text{df})}$. For my model these two values are very different (maximum $\text{GVIF}$ is ~$60$; maximum $\text{GVIF}^{1/(2\cdot\text{df})}$ is ~$3$). Could someone please explain to me which one I should be using, and what is meant by adjusting the dimension of the confidence ellipsoid? | Georges Monette and I introduced the GVIF in the paper "Generalized collinearity diagnostics," JASA 87:178-183, 1992 ( link ). As we explained, the GVIF represents the squared ratio of hypervolumes of the joint-confidence ellipsoid for a subset of coefficients to the "utopian" ellipsoid that would be obtained if the regressors in this subset were uncorrelated with regressors in the complementary subset. In the case of a single coefficient, this specializes to the usual VIF. To make GVIFs comparable across dimensions, we suggested using GVIF^(1/(2*Df)), where Df is the number of coefficients in the subset. In effect, this reduces the GVIF to a linear measure, and for the VIF, where Df = 1, is proportional to the inflation due to collinearity in the confidence interval for the coefficient. | {
"source": [
"https://stats.stackexchange.com/questions/70679",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/15949/"
]
} |
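A quick example of the output being discussed, using car::vif() on a model that contains a factor term (my own illustration):
library(car)
m <- lm(mpg ~ wt + hp + factor(cyl), data = mtcars)
vif(m)   # returns GVIF, Df and GVIF^(1/(2*Df)); compare the last column across terms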
70,801 | I am lost in normalizing, could anyone guide me please. I have a minimum and maximum values, say -23.89 and 7.54990767, respectively. If I get a value of 5.6878 how can I scale this value on a scale of 0 to 1. | If you want to normalize your data, you can do so as you suggest and simply calculate the following: $$z_i=\frac{x_i-\min(x)}{\max(x)-\min(x)}$$ where $x=(x_1,...,x_n)$ and $z_i$ is now your $i^{th}$ normalized data. As a proof of concept (although you did not ask for it) here is some R code and accompanying graph to illustrate this point: # Example Data
x = sample(-100:100, 50)
#Normalized Data
normalized = (x-min(x))/(max(x)-min(x))
# Histogram of example data and normalized data
par(mfrow=c(1,2))
hist(x, breaks=10, xlab="Data", col="lightblue", main="")
hist(normalized, breaks=10, xlab="Normalized Data", col="lightblue", main="") | {
"source": [
"https://stats.stackexchange.com/questions/70801",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5931/"
]
} |
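Applied to the specific numbers in the question:
(5.6878 - (-23.89)) / (7.54990767 - (-23.89))   # approximately 0.941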
70,848 | I have come across the term "closed-form solution" quite often. What does a closed-form solution mean? How does one determine if a close-form solution exists for a given problem? Searching online, I found some information, but nothing in the context of developing a statistical or probabilistic model / solution. I understand regression very well, so if any one can explain the concept with reference to regression or model-fitting, it will be easy to consume. :) | "An equation is said to be a closed-form solution if it solves a given
problem in terms of functions and mathematical operations from a given
generally accepted set. For example, an infinite sum would generally
not be considered closed-form. However, the choice of what to call
closed-form and what not is rather arbitrary since a new "closed-form"
function could simply be defined in terms of the infinite sum."
--Wolfram Alpha and "In mathematics, an expression is said to be a closed-form expression
if it can be expressed analytically in terms of a finite number of
certain "well-known" functions. Typically, these well-known functions
are defined to be elementary functions—constants, one variable x,
elementary operations of arithmetic (+ − × ÷), nth roots, exponent and
logarithm (which thus also include trigonometric functions and inverse
trigonometric functions). Often problems are said to be tractable if
they can be solved in terms of a closed-form expression." -- Wikipedia An example of a closed form solution in linear regression would be the least square equation $$\hat\beta=(X^TX)^{-1}X^Ty$$ | {
"source": [
"https://stats.stackexchange.com/questions/70848",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/30630/"
]
} |
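A small R contrast between a closed-form solution and an iterative numerical one for the same least-squares problem (my own sketch):
set.seed(1)
X <- cbind(1, rnorm(50))
y <- X %*% c(2, 3) + rnorm(50)
beta.closed <- solve(t(X) %*% X) %*% t(X) %*% y                        # one-step formula
beta.numeric <- optim(c(0, 0), function(b) sum((y - X %*% b)^2))$par   # iterative search over the loss
cbind(beta.closed, beta.numeric)                                       # essentially the same answer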
70,855 | How can I sample from a mixture distribution, and in particular a mixture of Normal distributions in R ? For example, if I wanted to sample from: $$
0.3\!\times\mathcal{N}(0,1)\; + \;0.5\!\times\mathcal{N}(10,1)\; + \;0.2\!\times\mathcal{N}(3,.1)
$$ how could I do that? | In general, one of the easiest ways to sample from a mixture distribution is the following: Algorithm Steps 1) Generate a random variable $U\sim\text{Uniform}(0,1)$ 2) If $U\in\left[\sum_{i=1}^kp_{k},\sum_{i=1}^{k+1}p_{k+1}\right)$ interval, where $p_{k}$ correspond to the the probability of the $k^{th}$ component of the mixture model, then generate from thedistribution of the $k^{th}$ component 3) Repeat steps 1) and 2) until you have the desired amount of samples from the mixture distribution Now using the general algorithm given above, you could sample from your example mixture of normals by using the following R code: #The number of samples from the mixture distribution
N = 100000
#Sample N random uniforms U
U =runif(N)
#Variable to store the samples from the mixture distribution
rand.samples = rep(NA,N)
#Sampling from the mixture
for(i in 1:N){
if(U[i]<.3){
rand.samples[i] = rnorm(1,0,1)
}else if(U[i]<.8){
rand.samples[i] = rnorm(1,10,1)
}else{
rand.samples[i] = rnorm(1,3,.1)
}
}
#Density plot of the random samples
plot(density(rand.samples),main="Density Estimate of the Mixture Model")
#Plotting the true density as a sanity check
x = seq(-20,20,.1)
truth = .3*dnorm(x,0,1) + .5*dnorm(x,10,1) + .2*dnorm(x,3,.1)
plot(density(rand.samples),main="Density Estimate of the Mixture Model",ylim=c(0,.2),lwd=2)
lines(x,truth,col="red",lwd=2)
legend("topleft",c("True Density","Estimated Density"),col=c("red","black"),lwd=2) Which generates: and as a sanity check: | {
"source": [
"https://stats.stackexchange.com/questions/70855",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/-1/"
]
} |
71,033 | I understand what a Posterior is, but I'm not sure what the latter means? How are the 2 different? Kevin P Murphy indicated in his textbook, Machine Learning: a Probabilistic Perspective , that it is "an internal belief state". What does that really mean? I was under the impression that a Prior represents your internal belief or bias, where am I going wrong? | The simple difference between the two is that the posterior distribution depends on the unknown parameter $\theta$ , i.e., the posterior distribution is: $$p(\theta|x)=c\times p(x|\theta)p(\theta)$$ where $c$ is the normalizing constant. While on the other hand, the posterior predictive distribution does not depend on the unknown parameter $\theta$ because it has been integrated out, i.e., the posterior predictive distribution is: $$p(x^*|x)=\int_\Theta c\times p(x^*,\theta|x)d\theta=\int_\Theta c\times p(x^*|\theta)p(\theta|x)d\theta$$ where $x^*$ is a new unobserved random variable and is independent of $x$ . I won't dwell on the posterior distribution explanation since you say you understand it but the posterior distribution "is the distribution of an unknown quantity, treated as a random variable, conditional on the evidence obtained" (Wikipedia). So basically its the distribution that explains your unknown, random, parameter. On the other hand, the posterior predictive distribution has a completely different meaning in that it is the distribution for future predicted data based on the data you have already seen. So the posterior predictive distribution is basically used to predict new data values. If it helps, is an example graph of a posterior distribution and a posterior predictive distribution: | {
"source": [
"https://stats.stackexchange.com/questions/71033",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/22545/"
]
} |
71,184 | What is the most appropriate sampling method to evaluate the performance of a classifier on a particular data set and compare it with other classifiers? Cross-validation seems to be standard practice, but I've read that methods such as .632 bootstrap are a better choice. As a follow-up: Does the choice of performance metric affect the answer (if I use AUC instead of accuracy)? My ultimate goal is to be able to say with some confidence that one machine learning method is superior to another for a particular dataset. | One important difference in the usual way cross validation and out-of-bootstrap methods are applied is that most people apply cross validation only once (i.e. each case is tested exactly once), while out-of-bootstrap validation is performed with a large number of repetitions/iterations.
In that situation, cross validation is subject to higher variance due to model instability. However, that can be avoided by using e.g. iterated/repeated $k$-fold cross validation. If that is done, at least for the spectroscopic data sets I've been working with, the total error of both resampling schemes seems to be the same in practice. Leave-one-out cross validation is discouraged, as there is no possibility to reduce the model instability-type variance and there are some classifiers and problems where it exhibits a huge pessimistic bias. .632 bootstrap does a reasonable job as long as the resampling error which is mixed in is not too optimistically biased. (E.g. for the data I work with, very wide matrices with lots of variates, it doesn't work very well as the models are prone to serious overfitting). This means also that I'd avoid using .632 bootstrap for comparing models of varying complexity. With .632+ bootstrap I don't have experience: if overfitting happens and is properly detected, it will equal the original out-of-bootstrap estimate, so I stick with plain oob or iterated/repeated cross validation for my data. Literature: Kohavi, R.: A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection Artificial Intelligence Proceedings 14th International Joint Conference, 20 -- 25. August 1995, Montréal, Québec, Canada, 1995, 1137 - 1145. (a classic) Dougherty and Braga-Neto have a number of publications on the topic , e.g. Dougherty, E. R. et al. : Performance of Error Estimators for Classification Current Bioinformatics, 2010, 5, 53-67 Beleites, C. et al. : Variance reduction in estimating classification error using sparse datasets Chemom Intell Lab Syst, 2005, 79, 91 - 100. We have a comparison of doing cross validation only once or iterating/repeating, and compare that with out-of-bootstrap and .632 bootstrap as well for particularly wide data with multi-collinearities. Kim, J.-H.: Estimating classification error rate: Repeated cross-validation, repeated hold-out and bootstrap, Computational Statistics & Data Analysis , 2009, 53, 3735 - 374 Also finds that repeated/iterated $k$-fold cross validation and out-of-bootstrap have similar performance (as opposed to doing the cross validation only once). Choice of metric: accuray (of which @FrankHarrell will tell you that it is a bad choice as it is not a proper scoring rule ) is subject to high variance because it counts each case either as completely correct or completely incorrect, even if the classifier predicted e.g. only 60 % posterior probability for the test case to belong to the class in question. A proper scoring rule is e.g. Brier's score, which is closely related to mean squared error in regression. Mean square error analoga are available for proportions like accuracy, sensitivity, specificity, predictive values: Beleites, C. et al. : Validation of soft classification models using partial class memberships: An extended concept of sensitivity & Co. applied to grading of astrocytoma tissues, Chemom Intell Lab Syst, 2013, 122, 12 - 22; DOI: 10.1016/j.chemolab.2012.12.003 (summary page giving link to preprint as well) My ultimate goal is to be able to say with some confidence that one machine learning method is superior to another for a particular dataset. Use a paired test to evaluate that. For comparing proportions, have a look at McNemar's test. The answer to this will be affected by the choice of metric. 
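For concreteness, here is a minimal added sketch (not from the original answer, using a hypothetical toy data set) that contrasts a proper scoring rule with hardened accuracy:
set.seed(42)
y <- rbinom(200, 1, 0.5)             # true 0/1 labels of the test cases
p <- plogis(2 * y - 1 + rnorm(200))  # hypothetical predicted probabilities for class 1
mean((p - y)^2)                      # Brier score: uses the full predicted probability
mean((p > 0.5) == y)                 # "hardened" accuracy: each case counts as entirely right or wrong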
As regression-type error measures do not have the "hardening" step of cutting decisions with a threshold, they often have less variance than their classification counterparts. Metrics like accuracy that are basically proportions will need huge numbers of test cases to establish the superiority of one classifier over another. Fleiss: "Statistical methods for rates and proportions" gives examples (and tables) for unpaired comparison of proportions. To give you an impression of what I mean with "huge sample sizes", have a look at the image in my answer to this other question .
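To put a rough number on those "huge sample sizes" (an added illustration, not from the original answer), base R's power.prop.test shows that detecting, say, an 80 % vs. 85 % accuracy difference between two classifiers evaluated on independent test sets already needs on the order of a thousand test cases per classifier:
power.prop.test(p1 = 0.80, p2 = 0.85, sig.level = 0.05, power = 0.80)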
Paired tests like McNemar's need less test cases, but IIRC still in the best case half (?) of the sample size needed for the unpaired test. To characterize a classifier's performance (hardened), you usually need a working curve of least two values such as the ROC (sensitivity vs. specificity) or the like. I seldom use overall accuracy or AUC, as my applications usually have restrictions e.g. that sensitivity is more important than specificity, or certain bounds on these measures should be met. If you go for "single number" sum characteristics, make sure that the working point of the models you're looking at is actually in a sensible range. For accuracy and other performance measures that summarize the performance for several classes according to the reference labels, make sure that you take into account the relative frequency of the classes that you'll encounter in the application - which is not necessarily the same as in your training or test data. Provost, F. et al. : The Case Against Accuracy Estimation for Comparing Induction Algorithms In Proceedings of the Fifteenth International Conference on Machine Learning, 1998 edit: comparing multiple classifiers I've been thinking about this problem for a while, but did not yet arrive at a solution (nor did I meet anyone who had a solution). Here's what I've got so far: The problem is that you run very swiftly into into massive multiple comparison situation. However, you may say that for the applications I have at hand, multiple comparisons is not really making things any worse, because I rarely have enought test cases to allow even a single comparison... I think tuning of model hyperparameters is a specialized version of the general model comparison problem, which may be easier to tackle for a beginning. However, there are rumours that the quality of models depends much on the expertise of the one who builds them, possibly even more so than on the choice of model type For the moment, I decided that "optimization is the root of all evil", and take a very different approach instead: I decide as much as possible by expert knowledge about the problem at hand. That actually allows to narrow down things quite a bit, so that I can often avoid model comparison. When I have to compare models, I try to be very open and clear reminding people about the uncertainty of the performance estimate and that particularly multiple model comparison is AFAIK still an unsolved problem. Edit 2: paired tests Among $n$ models, you can make $\frac{1}{2} (n^2 - n)$ comparisons between two different models (which is a massive multiple comparison situation), I don't know how to properly do this. However, the paired of the test just refers to the fact that as all models are tested with exactly the same test cases, you can split the cases into "easy" and "difficult" cases on the one hand, for which all models arrive at a correct (or wrong) prediction. They do not help distinguishing among the models. On the other hand, there are the "interesting" cases which are predicted correctly by some, but not by other models. Only these "interesting" cases need to be considered for judging superiority, neither the "easy" nor the "difficult" cases help with that. (This is how I understand the idea behind McNemar's test). 
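As an added illustration (a minimal sketch with hypothetical predictions, not from the original answer), McNemar's test in R is driven by exactly those discordant "interesting" cases, i.e. the off-diagonal cells of the paired correct/wrong table:
set.seed(7)
y     <- rbinom(100, 1, 0.5)                 # true labels of the common test cases
predA <- ifelse(runif(100) < 0.8, y, 1 - y)  # model A: correct with probability 0.8
predB <- ifelse(runif(100) < 0.7, y, 1 - y)  # model B: correct with probability 0.7
tab <- table(A_correct = predA == y, B_correct = predB == y)
mcnemar.test(tab)                            # paired comparison of the two models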
For the massively multiple comparison between $n$ models, I guess one problem is that unless you're very lucky, the more models you compare the fewer cases you will be able to exclude from the further considerations: even if all models are truly equal in their overall performance, it becomes less and less likely that a case ends up being always predicted correctly (or always wrongly) by $n$ models. | {
"source": [
"https://stats.stackexchange.com/questions/71184",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/30763/"
]
} |
71,260 | Suppose that $\mathbf{X} \sim N_{2}(\mathbf{\mu}, \mathbf{\Sigma})$. Then the conditional distribution of $X_1$ given that $X_2 = x_2$ is multivariate normally distributed with mean: $$ E[P(X_1 | X_2 = x_2)] = \mu_1+\frac{\sigma_{12}}{\sigma_{22}}(x_2-\mu_2)$$ and variance: $${\rm Var}[P(X_1 | X_2 = x_2)] = \sigma_{11}-\frac{\sigma_{12}^{2}}{\sigma_{22}}$$ It makes sense that the variance would decrease since we have more information. But what is the intuition behind the mean formula? How does the covariance between $X_1$ and $X_2$ factor into the conditional mean? | Synopsis Every statement in the question can be understood as a property of ellipses. The only property particular to the bivariate Normal distribution that is needed is the fact that in a standard bivariate Normal distribution of $X,Y$ --for which $X$ and $Y$ are uncorrelated--the conditional variance of $Y$ does not depend on $X$ . (This in turn is an immediate consequence of the fact that lack of correlation implies independence for jointly Normal variables.) The following analysis shows precisely what property of ellipses is involved and derives all the equations of the question using elementary ideas and the simplest possible arithmetic, in a way intended to be easily remembered. Circularly symmetric distributions The distribution of the question is a member of the family of bivariate Normal distributions. They are all derived from a basic member, the standard bivariate Normal, which describes two uncorrelated standard Normal distributions (forming its two coordinates). The left side is a relief plot of the standard bivariate normal density. The right side shows the same in pseudo-3D, with the front part sliced away. This is an example of a circularly symmetric distribution: the density varies with distance from a central point but not with the direction away from that point. Thus, the contours of its graph (at the right) are circles. Most other bivariate Normal distributions are not circularly symmetric, however: their cross-sections are ellipses. These ellipses model the characteristic shape of many bivariate point clouds. These are portraits of the bivariate Normal distribution with covariance matrix $\Sigma = \left(\begin{array}{cc} 1 & -\frac{2}{3} \\ -\frac{2}{3} & 1 \\\end{array}\right).$ It is a model for data with correlation coefficient $-2/3$ . How to Create Ellipses An ellipse--according to its oldest definition--is a conic section, which is a circle distorted by a projection onto another plane. By considering the nature of projection, just as visual artists do, we may decompose it into a sequence of distortions that are easy to understand and calculate with. First, stretch (or, if necessary, squeeze) the circle along what will become the long axis of the ellipse until it is the correct length: Next, squeeze (or stretch) this ellipse along its minor axis: Third, rotate it around its center into its final orientation: Finally, shift it to the desired location: These are all affine transformations. (In fact, the first three are linear transformations ; the final shift makes it affine.) Because a composition of affine transformations is (by definition) still affine, the net distortion from the circle to the final ellipse is an affine transformation. But it can be somewhat complicated: Notice what happened to the ellipse's (natural) axes: after they were created by the shift and squeeze, they (of course) rotated and shifted along with the axis itself. 
We easily see these axes even when they are not drawn, because they are axes of symmetry of the ellipse itself. We would like to apply our understanding of ellipses to understanding distorted circularly symmetric distributions, like the bivariate Normal family. Unfortunately, there is a problem with these distortions : they do not respect the distinction between the $x$ and $y$ axes. The rotation at step 3 ruins that. Look at the faint coordinate grids in the backgrounds: these show what happens to a grid (of mesh $1/2$ in both directions) when it is distorted. In the first image the spacing between the original vertical lines (shown solid) is doubled. In the second image the spacing between the original horizontal lines (shown dashed) is shrunk by a third. In the third image the grid spacings are not changed, but all the lines are rotated. They shift up and to the right in the fourth image. The final image, showing the net result, displays this stretched, squeezed, rotated, shifted grid. The original solid lines of constant $x$ coordinate no longer are vertical. The key idea --one might venture to say it is the crux of regression--is that there is a way in which the circle can be distorted into an ellipse without rotating the vertical lines . Because the rotation was the culprit, let's cut to the chase and show how to created a rotated ellipse without actually appearing to rotate anything ! This is a skew transformation. It does two things at once: It squeezes in the $y$ direction (by an amount $\lambda$ , say). This leaves the $x$ -axis alone. It lifts any resulting point $(x,\lambda y)$ by an amount directly proportional to $x$ . Writing that constant of proportionality as $\rho$ , this sends $(x,\lambda y)$ to $(x, \lambda y+\rho x)$ . The second step lifts the $x$ -axis into the line $y=\rho x$ , shown in the previous figure. As shown in that figure, I want to work with a special skew transformation, one that effectively rotates the ellipse by 45 degrees and inscribes it into the unit square.
The major axis of this ellipse is the line $y=x$ . It is visually evident that $|\rho| \le 1$ . (Negative values of $\rho$ tilt the ellipse down to the right rather than up to the right.) This is the geometric explanation of "regression to the mean." Choosing an angle of 45 degrees makes the ellipse symmetric around the square's diagonal (part of the line $y=x$ ). To figure out the parameters of this skew transformation, observe: The lifting by $\rho x$ moves the point $(1,0)$ to $(1,\rho)$ . The symmetry around the main diagonal then implies the point $(\rho, 1)$ also lies on the ellipse. Where did this point start out? The original (upper) point on the unit circle (having implicit equation $x^2+y^2=1$ ) with $x$ coordinate $\rho$ was $(\rho, \sqrt{1-\rho^2})$ . Any point of the form $(\rho, y)$ first got squeezed to $(\rho, \lambda y)$ and then lifted to $(\rho, \lambda y + \rho\times\rho)$ . The unique solution to the equation $(\rho, \lambda \sqrt{1-\rho^2} + \rho^2) = (\rho, 1)$ is $\lambda = \sqrt{1-\rho^2}$ . That is the amount by which all distances in the vertical direction must be squeezed in order to create an ellipse at a 45 degree angle when it is skewed vertically by $\rho$ . To firm up these ideas, here is a tableau showing how a circularly symmetric distribution is distorted into distributions with elliptical contours by means of these skew transformations. The panels show values of $\rho$ equal to $0,$ $3/10,$ $6/10,$ and $9/10,$ from left to right. The leftmost figure shows a set of starting points around one of the circular contours as well as part of the horizontal axis. Subsequent figures use arrows to show how those points are moved. The image of the horizontal axis appears as a slanted line segment (with slope $\rho$ ). (The colors represent different amounts of density in the different figures.) Application We are ready to do regression. A standard, elegant (yet simple) method to perform regression is first to express the original variables in new units of measurement: we center them at their means and use their standard deviations as the units. This moves the center of the distribution to the origin and makes all its elliptical contours slant 45 degrees (up or down). When these standardized data form a circular point cloud, the regression is easy: the means conditional on $x$ are all $0$ , forming a line passing through the origin. (Circular symmetry implies symmetry with respect to the $x$ axis, showing that all conditional distributions are symmetric, whence they have $0$ means.) As we have seen, we may view the standardized distribution as arising from this basic simple situation in two steps: first, all the (standardized) $y$ values are multiplied by $\sqrt{1-\rho^2}$ for some value of $\rho$ ; next, all values with $x$ -coordinates are vertically skewed by $\rho x$ . What did these distortions do to the regression line (which plots the conditional means against $x$ )? The shrinking of $y$ coordinates multiplied all vertical deviations by a constant. This merely changed the vertical scale and left all conditional means unaltered at $0$ . The vertical skew transformation added $\rho x$ to all conditional values at $x$ , thereby adding $\rho x$ to their conditional mean: the curve $y=\rho x$ is the regression curve, which turns out to be a line. 
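For readers who like a numerical check, here is a small simulation sketch (not part of the original answer) of the construction in standardized coordinates: squeeze an independent vertical deviation by $\sqrt{1-\rho^2}$ and lift it by $\rho x$; the least-squares slope then comes out close to $\rho$ and the residual variance close to $1-\rho^2$:
set.seed(1)
rho <- -0.5                              # the case shown in the plot further below
x   <- rnorm(1e5)                        # standardized X
e   <- rnorm(1e5)                        # independent, symmetric vertical deviation
y   <- sqrt(1 - rho^2) * e + rho * x     # squeeze, then apply the vertical skew
fit <- lm(y ~ x)
coef(fit)[2]                             # approximately rho
var(residuals(fit))                      # approximately 1 - rho^2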
Similarly, we may verify that because the $x$ -axis is the least squares fit to the circularly symmetric distribution, the least squares fit to the transformed distribution also is the line $y=\rho x$ : the least-squares line coincides with the regression line. These beautiful results are a consequence of the fact that the vertical skew transformation does not change any $x$ coordinates. We can easily say more: The first bullet (about shrinking) shows that when $(X,Y)$ has any circularly symmetric distribution, the conditional variance of $Y|X$ was multiplied by $\left(\sqrt{1-\rho^2}\right)^2 = 1 - \rho^2$ . More generally: the vertical skew transformation rescales each conditional distribution by $\sqrt{1-\rho^2}$ and then recenters it by $\rho x$ . For the standard bivariate Normal distribution, the conditional variance is a constant (equal to $1$ ), independent of $x$ . We immediately conclude that after applying this skew transformation, the conditional variance of the vertical deviations is still a constant and equals $1-\rho^2$ . Because the conditional distributions of a bivariate Normal are themselves Normal, now that we know their means and variances, we have full information about them. Finally, we need to relate $\rho$ to the original covariance matrix $\Sigma$ . For this, recall that the (nicest) definition of the correlation coefficient between two standardized variables $X$ and $Y$ is the expectation of their product $XY$ . (The correlation of $X$ and $Y$ is simply declared to be the correlation of their standardized versions.) Therefore, when $(X,Y)$ follows any circularly symmetric distribution and we apply the skew transformation to the variables, we may write $$\varepsilon = Y - \rho X$$ for the vertical deviations from the regression line and notice that $\varepsilon$ must have a symmetric distribution around $0$ . Why? Because before the skew transformation was applied, $Y$ had a symmetric distribution around $0$ and then we (a) squeezed it and (b) lifted it by $\rho X$ . The former did not change its symmetry while the latter recentered it at $\rho X$ , QED. The next figure illustrates this. The black lines trace out heights proportional to the conditional densities at various regularly-spaced values of $x$ . The thick white line is the regression line, which passes through the center of symmetry of each conditional curve. This plot shows the case $\rho = -1/2$ in standardized coordinates. Consequently $$\mathbb{E}(XY) = \mathbb{E}\left(X(\rho X + \varepsilon)\right) = \rho\mathbb{E}(X^2) + \mathbb{E}(X\varepsilon) = \rho(1) + 0=\rho.$$ The final equality is due to two facts: (1) because $X$ has been standardized, the expectation of its square is its standardized variance, equal to $1$ by construction; and (2) the expectation of $X\varepsilon$ equals the expectation of $X(-\varepsilon)$ by virtue of the symmetry of $\varepsilon$ . Because the latter is the negative of the former, both must equal $0$ : this term drops out. We have identified the parameter of the skew transformation, $\rho$ , as being the correlation coefficient of $X$ and $Y$ . Conclusions By observing that any ellipse may be produced by distorting a circle with a vertical skew transformation that preserves the $x$ coordinate, we have arrived at an understanding of the contours of any distribution of random variables $(X,Y)$ that is obtained from a circularly symmetric one by means of stretches, squeezes, rotations, and shifts (that is, any affine transformation). 
By re-expressing the results in terms of the original units of $x$ and $y$ --which amount to adding back their means, $\mu_x$ and $\mu_y$ , after multiplying by their standard deviations $\sigma_x$ and $\sigma_y$ --we find that: The least-squares line and the regression curve both pass through the origin of the standardized variables, which corresponds to the "point of averages" $(\mu_x,\mu_y)$ in original coordinates. The regression curve, which is defined to be the locus of conditional means, $\{(x, \rho x)\},$ coincides with the least-squares line. The slope of the regression line in standardized coordinates is the correlation coefficient $\rho$ ; in the original units it therefore equals $\sigma_y \rho / \sigma_x$ . Consequently the equation of the regression line is $$y = \frac{\sigma_y\rho}{\sigma_x}\left(x - \mu_x\right) + \mu_y.$$ The conditional variance of $Y|X$ is $\sigma_y^2(1-\rho^2)$ times the conditional variance of $Y'|X'$ where $(X',Y')$ has a standard distribution (circularly symmetric with unit variances in both coordinates), $X'=(X-\mu_X)/\sigma_x$ , and $Y'=(Y-\mu_Y)/\sigma_Y$ . None of these results is a particular property of bivariate Normal distributions! For the bivariate Normal family, the conditional variance of $Y'|X'$ is constant (and equal to $1$ ): this fact makes that family particularly simple to work with. In particular: Because in the covariance matrix $\Sigma$ the coefficients are $\sigma_{11}=\sigma_x^2,$ $\sigma_{12}=\sigma_{21}=\rho\sigma_x\sigma_y,$ and $\sigma_{22}=\sigma_y^2,$ the conditional variance of $Y|X$ for a bivariate Normal distribution is $$\sigma_y^2(1-\rho^2)=\sigma_{22}\left(1-\left(\frac{\sigma_{12}}{\sqrt{\sigma_{11}\sigma_{22}}}\right)^2\right)=\sigma_{22} - \frac{\sigma_{12}^2}{\sigma_{11}}.$$ Technical Notes The key idea can be stated in terms of matrices describing the linear transformations. It comes down to finding a suitable "square root" of the correlation matrix for which $y$ is an eigenvector. Thus: $$\left(\begin{array}{cc} 1 & \rho \\ \rho & 1 \\\end{array}\right) = \mathbb{A}\mathbb{A}'$$ where $$\mathbb{A} = \left(\begin{array}{cc} 1 & 0 \\ \rho & \sqrt{1-\rho^2} \\\end{array}\right).$$ A much better known square root is the one initially described (involving a rotation instead of a skew transformation); it is the one produced by a singular value decomposition and it plays a prominent role in principal components analysis (PCA): $$\left(\begin{array}{cc} 1 & \rho \\ \rho & 1 \\\end{array}\right) = \mathbb{B}\mathbb{B}';$$ $$\mathbb{B} = \mathbb{Q} \left(
\begin{array}{cc}
\sqrt{\rho +1} & 0 \\
0 & \sqrt{1-\rho } \\
\end{array}
\right)\mathbb{Q}'$$ where $\mathbb{Q} = \left(
\begin{array}{cc}
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\
\end{array}
\right)$ is the rotation matrix for a $45$ degree rotation. Thus, the distinction between PCA and regression comes down to the difference between two special square roots of the correlation matrix. | {
"source": [
"https://stats.stackexchange.com/questions/71260",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/30792/"
]
} |
71,489 | Can anybody explain differences and give specific examples how to use these three analyses? LDA - Linear Discriminant Analysis FDA - Fisher's Discriminant Analysis QDA - Quadratic Discriminant Analysis I searched everywhere, but couldn't find real examples with real values to see how these analyses are used and data calculated, only lots of formulas which are hard to understand without any real examples. As I tried to understand it was hard to distinguish which equations/formulas belonged to LDA and which to FDA. For example let's say there is such data: x1 x2 class
1 2 a
1 3 a
2 3 a
3 3 a
1 0 b
2 1 b
2 2 b And let's say some testing data: x1 x2
2 4
3 5
3 6 So how to use such data with all these three approaches? It would be best to see how to calculate everything by hand, not using some math package which calculates everything behind the scenes. P.S. I only found this tutorial: http://people.revoledu.com/kardi/tutorial/LDA/LDA.html#LDA .
It shows how to use LDA. | "Fisher's Discriminant Analysis" is simply LDA in a situation of 2 classes. When there is only 2 classes computations by hand are feasible and the analysis is directly related to Multiple Regression. LDA is the direct extension of Fisher's idea on situation of any number of classes and uses matrix algebra devices (such as eigendecomposition) to compute it. So, the term "Fisher's Discriminant Analysis" can be seen as obsolete today. "Linear Discriminant analysis" should be used instead. See also . Discriminant analysis with 2+ classes (multi-class) is canonical by its algorithm (extracts dicriminants as canonical variates); rare term "Canonical Discriminant Analysis" usually stands simply for (multiclass) LDA therefore (or for LDA + QDA, omnibusly). Fisher used what was then called "Fisher classification functions" to classify objects after the discriminant function has been computed. Nowadays, a more general Bayes' approach is used within LDA procedure to classify objects. To your request for explanations of LDA I may send you to these my answers: extraction in LDA , classification in LDA , LDA among related procedures . Also this , this , this questions and answers. Just like ANOVA requires an assumption of equal variances, LDA requires an assumption of equal variance-covariance matrices (between the input variables) of the classes. This assumption is important for classification stage of the analysis. If the matrices substantially differ, observations will tend to be assigned to the class where variability is greater. To overcome the problem, QDA was invented. QDA is a modification of LDA which allows for the above heterogeneity of classes' covariance matrices. If you have the heterogeneity (as detected for example by Box's M test) and you don't have QDA at hand, you may still use LDA in the regime of using individual covariance matrices (rather than the pooled matrix) of the discriminants at classification. This partly solves the problem, though less effectively than in QDA, because - as just pointed - these are the matrices between the discriminants and not between the original variables (which matrices differed). Let me leave analyzing your example data for yourself. Reply to @zyxue's answer and comments LDA is what you defined FDA is in your answer. LDA first extracts linear constructs (called discriminants) that maximize the between to within separation, and then uses those to perform (gaussian) classification. If (as you say) LDA were not tied with the task to extract the discriminants LDA would appear to be just a gaussian classifier, no name "LDA" would be needed at all. It is that classification stage where LDA assumes both normality and variance-covariance homogeneity of classes. The extraction or "dimensionality reduction" stage of LDA assumes linearity and variance-covariance homogeneity , the two assumptions together make "linear separability" feasible. (We use single pooled $S_w$ matrix to produce discriminants which therefore have identity pooled within-class covariance matrix, that give us the right to apply the same set of discriminants to classify to all the classes. If all $S_w$ s are same the said within-class covariances are all same, identity; that right to use them becomes absolute.) Gaussian classifier (the second stage of LDA) uses Bayes rule to assign observations to classes by the discriminants. The same result can be accomplished via so called Fisher linear classification functions which utilizes original features directly. 
However, Bayes' approach based on discriminants is a little bit general in that it will allow to use separate class discriminant covariance matrices too, in addition to the default way to use one, the pooled one. Also, it will allow to base classification on a subset of discriminants. When there are only two classes, both stages of LDA can be described together in a single pass because "latents extraction" and "observations classification" reduce then to the same task. | {
"source": [
"https://stats.stackexchange.com/questions/71489",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/30876/"
]
} |
71,519 | I’ve noticed that in R, Poisson and negative binomial (NB) regressions always seem to fit the same coefficients for categorical, but not continuous, predictors. For example, here's a regression with a categorical predictor: data(warpbreaks)
library(MASS)
rs1 = glm(breaks ~ tension, data=warpbreaks, family="poisson")
rs2 = glm.nb(breaks ~ tension, data=warpbreaks)
#compare coefficients
cbind("Poisson"=coef(rs1), "NB"=coef(rs2)) Here is an example with a continuous predictor, where the Poisson and NB fit different coefficients: data(cars)
rs1 = glm(dist ~ speed, data=cars, family="poisson")
rs2 = glm.nb(dist ~ speed, data=cars)
#compare coefficients
cbind("Poisson"=coef(rs1), "NB"=coef(rs2)) (Of course these aren't count data, and the models aren't meaningful...) Then I recode the predictor into a factor, and the two models again fit the same coefficients: library(Hmisc)
speedCat = cut2(cars$speed, g=5)
#you can change g to get a different number of bins
rs1 = glm(cars$dist ~ speedCat, family="poisson")
rs2 = glm.nb(cars$dist ~ speedCat)
#compare coefficients
cbind("Poisson"=coef(rs1), "NB"=coef(rs2)) However, Joseph Hilbe’s Negative Binomial Regression gives an example (6.3, pg 118-119) where a categorical predictor, sex, is fit with slightly different coefficients by the Poisson ($b=0.883$) and NB ($b=0.881$). He says: “The incidence rate ratios between the Poisson and NB models are quite similar. This is not surprising given the proximity of $\alpha$ [corresponding to $1/\theta$ in R] to zero.” I understand this, but in the above examples, summary(rs2) tells us that $\theta$ is estimated at 9.16 and 7.93 respectively. So why are the coefficients exactly the same? And why only for categorical predictors? Edit #1 Here is an example with two non-orthogonal predictors. Indeed, the coefficients are no longer the same: data(cars)
#make random categorical predictor
set.seed(1); randomCats1 = sample( c("A","B","C"), length(cars$dist), replace=T)
set.seed(2); randomCats2 = sample( c("C","D","E"), length(cars$dist), replace=T)
rs1 = glm(dist ~ randomCats1 + randomCats2, data=cars, family="poisson")
rs2 = glm.nb(dist ~ randomCats1 + randomCats2, data=cars)
#compare coefficients
cbind("Poisson"=coef(rs1), "NB"=coef(rs2)) And, including another predictor causes the models to fit different coefficients even when the new predictor is continuous. So, it is something to do with the orthogonality of the dummy variables I created in my original example? rs1 = glm(dist ~ randomCats1 + speed, data=cars, family="poisson")
rs2 = glm.nb(dist ~ randomCats1 + speed, data=cars)
#compare coefficients
cbind("Poisson"=coef(rs1), "NB"=coef(rs2)) | You have discovered an intimate, but generic, property of GLMs fit by maximum likelihood . The result drops out once one considers the simplest case of all: Fitting a single parameter to a single observation! One sentence answer : If all we care about is fitting separate means to disjoint subsets of our sample, then GLMs will always yield $\hat\mu_j = \bar y_j$ for each subset $j$, so the actual error structure and parametrization of the density both become irrelevant to the (point) estimation! A bit more : Fitting orthogonal categorical factors by maximum likelihood is equivalent to fitting separate means to disjoint subsets of our sample, so this explains why Poisson and negative binomial GLMs yield the same parameter estimates. Indeed, the same happens whether we use Poisson, negbin, Gaussian, inverse Gaussian or Gamma regression (see below). In the Poisson and negbin case, the default link function is the $\log$ link, but that is a red herring; while this yields the same raw parameter estimates, we'll see below that this property really has nothing to do with the link function at all. When we are interested in a parametrization with more structure, or that depends on continuous predictors, then the assumed error structure becomes relevant due to the mean-variance relationship of the distribution as it relates to the parameters and the nonlinear function used for modeling the conditional means. GLMs and exponential dispersion families: Crash course An exponential dispersion family in natural form is one such that the log density is of the form
$$
\log f(y;\,\theta,\nu) = \frac{\theta y - b(\theta)}{\nu} + a(y,\nu) \>.
$$ Here $\theta$ is the natural parameter and $\nu$ is the dispersion parameter . If $\nu$ were known, this would just be a standard one-parameter exponential family. All the GLMs considered below assume an error model from this family. Consider a sample of a single observation from this family. If we fit $\theta$ by maximum likelihood, we get that $y = b'(\hat\theta)$, irrespective of the value of $\nu$. This readily extends to the case of an iid sample since the log likelihoods add, yielding $\bar y = b'(\hat\theta)$. But, we also know, due to the nice regularity of the log density as a function of $\theta$, that
$$
\frac{\partial}{\partial \theta} \mathbb E \log f(Y;\theta,\nu) = \mathbb E \frac{\partial}{\partial \theta} \log f(Y;\theta,\nu) = 0 \>.
$$
So, in fact $b'(\theta) = \mathbb E Y = \mu$. Since maximum likelihood estimates are invariant under transformations, this means that
$
\bar y = \hat\mu
$
for this family of densities. Now, in a GLM, we model $\mu_i$ as $\mu_i = g^{-1}(\mathbf x_i^T \beta)$ where $g$ is the link function. But if $\mathbf x_i$ is a vector of all zeros except for a single 1 in position $j$, then $\mu_i = g^{-1}(\beta_j)$. The likelihood of the GLM then factorizes according to the $\beta_j$'s and we proceed as above. This is precisely the case of orthogonal factors. What's so different about continuous predictors? When the predictors are continuous or they are categorical, but cannot be reduced to an orthogonal form, then the likelihood no longer factors into individual terms with a separate mean depending on a separate parameter. At this point, the error structure and link function do come into play. If one cranks through the (tedious) algebra, the likelihood equations become
$$
\sum_{i=1}^n \frac{(y_i - \mu_i)x_{ij}}{\sigma_i^2}\frac{\partial \mu_i}{\partial \lambda_i} = 0\>,
$$
for all $j = 1,\ldots,p$ where $\lambda_i = \mathbf x_i^T \beta$. Here, the $\beta$ and $\nu$ parameters enter implicitly through the link relationship $\mu_i = g^{-1}(\lambda_i) = g^{-1}(\mathbf x_i^T \beta)$ and variance $\sigma_i^2$. In this way, the link function and assumed error model become relevant to the estimation. Example: The error model (almost) doesn't matter In the example below, we generate negative binomial random data depending on three categorical factors. Each observation comes from a single category and the same dispersion parameter ($k = 6$) is used. We then fit to these data using five different GLMs, each with a $\log$ link: ( a ) negative binomial, ( b ) Poisson, ( c ) Gaussian, ( d ) Inverse Gaussian and ( e ) Gamma GLMs. All of these are examples of exponential dispersion families. From the table, we can see that the parameter estimates are identical , even though some of these GLMs are for discrete data and others are for continuous, and some are for nonnegative data while others are not. negbin poisson gaussian invgauss gamma
XX1 4.234107 4.234107 4.234107 4.234107 4.234107
XX2 4.790820 4.790820 4.790820 4.790820 4.790820
XX3 4.841033 4.841033 4.841033 4.841033 4.841033 The caveat in the heading comes from the fact that the fitting procedure will fail if the observations don't fall within the domain of the particular density. For example, if we had $0$ counts randomly generated in the data above, then the Gamma GLM would fail to converge since Gamma GLMs require strictly positive data. Example: The link function (almost) doesn't matter Using the same data, we repeat the procedure fitting the data with a Poisson GLM with three different link functions: ( a ) $\log$ link, ( b ) identity link and ( c ) square-root link. The table below shows the coefficient estimates after converting back to the log parameterization. (So, the second column shows $\log(\hat \beta)$ and the third shows $\log(\hat \beta^2)$ using the raw $\hat\beta$ from each of the fits). Again, the estimates are identical. > coefs.po
log id sqrt
XX1 4.234107 4.234107 4.234107
XX2 4.790820 4.790820 4.790820
XX3 4.841033 4.841033 4.841033 The caveat in the heading simply refers to the fact that the raw estimates will vary with the link function, but the implied mean-parameter estimates will not. R code # Warning! This code is a bit simplified for compactness.
library(MASS)
n <- 5
m <- 3
set.seed(17)
b <- exp(5+rnorm(m))
k <- 6
# Random negbin data; orthogonal factors
y <- rnbinom(m*n, size=k, mu=rep(b,each=n))
X <- factor(paste("X",rep(1:m,each=n),sep=""))
# Fit a bunch of GLMs with a log link
con <- glm.control(maxit=100)
mnb <- glm(y~X+0, family=negative.binomial(theta=2))
mpo <- glm(y~X+0, family="poisson")
mga <- glm(y~X+0, family=gaussian(link=log), start=rep(1,m), control=con)
miv <- glm(y~X+0, family=inverse.gaussian(link=log), start=rep(2,m), control=con)
mgm <- glm(y~X+0, family=Gamma(link=log), start=rep(1,m), control=con)
coefs <- cbind(negbin=mnb$coef, poisson=mpo$coef, gaussian=mga$coef,
invgauss=miv$coef, gamma=mgm$coef)
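# (Added check, not in the original answer:) the five columns of 'coefs'
# agree to numerical precision, matching the table shown above.
apply(coefs, 1, function(r) diff(range(r)))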
# Fit a bunch of Poisson GLMs with different links.
mpo.log <- glm(y~X+0, family=poisson(link="log"))
mpo.id <- glm(y~X+0, family=poisson(link="identity"))
mpo.sqrt <- glm(y~X+0, family=poisson(link="sqrt"))
coefs.po <- cbind(log=mpo$coef, id=log(mpo.id$coef), sqrt=log(mpo.sqrt$coef^2)) | {
"source": [
"https://stats.stackexchange.com/questions/71519",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/11511/"
]
} |
71,782 | Ok, this is a question that keeps me up at night. Can the bootstrap procedure be interpreted as approximating some Bayesian procedure (except for the Bayesian bootstrap)? I really like the Bayesian "interpretation" of statistics which I find nicely coherent and easy to understand. However, I also have a weakness for the bootstrap procedure which is so simple, yet delivers reasonable inferences in many situations. I would be more happy with bootstrapping, however, if I knew that the bootstrap was approximating a posterior distribution in some sense. I know of the "Bayesian bootstrap" (Rubin, 1981), but from my perspective that version of the bootstrap is as problematic as the standard bootstrap. The problem is the really peculiar model assumption that you make, both when doing the classical and the Bayesian bootstrap, that is, the possible values of the distribution are only the values I've already seen. How can these strange model assumptions still yield the very reasonable inferences that bootstrap procedures yield? I have been looking for articles that have investigated the properties of the bootstrap (e.g. Weng, 1989) but I haven't found any clear explanation that I'm happy with. References Donald B. Rubin (1981). The Bayesian Bootstrap. Ann. Statist. Volume 9, Number 1 , 130-134. Chung-Sing Weng (1989). On a Second-Order Asymptotic Property of the Bayesian Bootstrap Mean. The Annals of Statistics , Vol. 17, No. 2 , pp. 705-710. | Section 8.4 of The Elements of Statistical Learning by Hastie, Tibshirani, and Friedman is "Relationship Between the Bootstrap and Bayesian Inference." That might be just what you are looking for. I believe that this book is freely available through a Stanford website, although I don't have the link on hand. Edit: Here is a link to the book, which the authors have made freely available online: http://www-stat.stanford.edu/~tibs/ElemStatLearn/ On page 272, the authors write: In this sense, the bootstrap distribution represents an (approximate)
nonparametric, noninformative posterior distribution for our
parameter. But this bootstrap distribution is obtained
painlessly — without having to formally specify a prior and without
having to sample from the posterior distribution. Hence we might think
of the bootstrap distribution as a “poor man’s” Bayes posterior. By
perturbing the data, the bootstrap approximates the Bayesian effect
of perturbing the parameters, and is typically much simpler to carry
out. One more piece of the puzzle is found in this cross validated question which mentions the Dvoretzky–Kiefer–Wolfowitz inequality that "shows [...] that the empirical distribution function converges uniformly to the true distribution function exponentially fast in probability." So all in all the non-parametric bootstrap could be seen as an asymptotic method that produces "an (approximate) nonparametric, noninformative posterior distribution for our parameter" and where this approximation gets better "exponentially fast" as the number of samples increases. | {
"source": [
"https://stats.stackexchange.com/questions/71782",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6920/"
]
} |
71,946 | Is it possible to overfit a logistic regression model?
I saw a video saying that if my area under the ROC curve is higher than 95%, then it's very likely to be overfitted, but is it possible to overfit a logistic regression model? | Yes, you can overfit logistic regression models. But first, I'd like to address the point about the AUC (Area Under the Receiver Operating Characteristic Curve):
There are no universal rules of thumb with the AUC, ever ever ever. What the AUC is is the probability that a randomly sampled positive (or case) will have a higher marker value than a negative (or control) because the AUC is mathematically equivalent to the U statistic. What the AUC is not is a standardized measure of predictive accuracy. Highly deterministic events can have single predictor AUCs of 95% or higher (such as in controlled mechatronics, robotics, or optics), some complex multivariable logistic risk prediction models have AUCs of 64% or lower such as breast cancer risk prediction, and those are respectably high levels of predictive accuracy. A sensible AUC value, as with a power analysis, is prespecified by gathering knowledge of the background and aims of a study apriori . The doctor/engineer describes what they want, and you, the statistician, resolve on a target AUC value for your predictive model. Then begins the investigation. It is indeed possible to overfit a logistic regression model. Aside from linear dependence (if the model matrix is of deficient rank), you can also have perfect concordance, or that is the plot of fitted values against Y perfectly discriminates cases and controls. In that case, your parameters have not converged but simply reside somewhere on the boundary space that gives a likelihood of $\infty$. Sometimes, however, the AUC is 1 by random chance alone. There's another type of bias that arises from adding too many predictors to the model, and that's small sample bias. In general, the log odds ratios of a logistic regression model tend toward a biased factor of $2\beta$ because of non-collapsibility of the odds ratio and zero cell counts. In inference, this is handled using conditional logistic regression to control for confounding and precision variables in stratified analyses. However, in prediction, you're SooL. There is no generalizable prediction when you have $p \gg n \pi(1-\pi)$, ($\pi = \mbox{Prob}(Y=1)$) because you're guaranteed to have modeled the "data" and not the "trend" at that point. High dimensional (large $p$) prediction of binary outcomes is better done with machine learning methods. Understanding linear discriminant analysis, partial least squares, nearest neighbor prediction, boosting, and random forests would be a very good place to start. | {
"source": [
"https://stats.stackexchange.com/questions/71946",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/31090/"
]
} |
71,962 | My understanding was that descriptive statistics quantitatively described features of a data sample, while inferential statistics made inferences about the populations from which samples were drawn. However, the wikipedia page for statistical inference states: For the most part, statistical inference makes propositions about
populations, using data drawn from the population of interest via some
form of random sampling. The "for the most part" has made me think I perhaps don't properly understand these concepts. Are there examples of inferential statistics that don't make propositions about populations? | Coming from a behavioural sciences background, I associate this terminology particularly with introductory statistics textbooks. In this context the distinction is that : Descriptive statistics are functions of the sample data that are intrinsically interesting in describing some feature of the data. Classic descriptive statistics include mean, min, max, standard deviation, median, skew, kurtosis. Inferential statistics are a function of the sample data that assists you to draw an inference regarding an hypothesis about a population parameter. Classic inferential statistics include z, t, $\chi^2$, F-ratio, etc. The important point is that any statistic, inferential or descriptive, is a function of the sample data. A parameter is a function of the population, where the term population is the same as saying the underlying data generating process. From this perspective the status of a given function of the data as a descriptive or inferential statistic depends on the purpose for which you are using it. That said, some statistics are clearly more useful in describing relevant features of the data, and some are well suited to aiding inference. Inferential statistics: Standard test statistics like t and z, for a given data generating process, where the null hypothesis is false, the expected value is strongly influenced by sample size. Most researchers would not see such statistics as estimating a population parameter of intrinsic interest. Descriptive statistics : In contrast descriptive statistics do estimate population parameters that are typically of intrinsic interest. For example the sample mean and standard deviation provide estimates of the equivalent population parameters. Even descriptive statistics like the minimum and maximum provide information about equivalent or similar population parameters, although of course in this case, much more care is required. Furthermore, many descriptive statistics might be biased or otherwise less than ideal estimators. However, they still have some utility in estimating a population parameter of interest. So from this perspective, the important things to understand are: statistic : function of the sample data parameter : function of the population (data generating process) estimator : function of the sample data used to provide an estimate of a parameter inference : process of reaching a conclusion about a parameter Thus, you could either define the distinction between descriptive and inferential based on the intention of the researcher using the statistic, or you could define a statistic based on how it is typically used. | {
"source": [
"https://stats.stackexchange.com/questions/71962",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9162/"
]
} |
72,251 | I am starting to dabble with the use of glmnet with LASSO Regression where my outcome of interest is dichotomous. I have created a small mock data frame below: age <- c(4, 8, 7, 12, 6, 9, 10, 14, 7)
gender <- c(1, 0, 1, 1, 1, 0, 1, 0, 0)
bmi_p <- c(0.86, 0.45, 0.99, 0.84, 0.85, 0.67, 0.91, 0.29, 0.88)
m_edu <- c(0, 1, 1, 2, 2, 3, 2, 0, 1)
p_edu <- c(0, 2, 2, 2, 2, 3, 2, 0, 0)
f_color <- c("blue", "blue", "yellow", "red", "red", "yellow", "yellow",
"red", "yellow")
asthma <- c(1, 1, 0, 1, 0, 0, 0, 1, 1)
# df is a data frame for further use!
df <- data.frame(age, gender, bmi_p, m_edu, p_edu, f_color, asthma) The columns (variables) in the above dataset are as follows: age (age of child in years) - continuous gender - binary (1 = male; 0 = female) bmi_p (BMI percentile) - continuous m_edu (mother highest education level) - ordinal (0 = less than high school; 1 = high school diploma; 2 = bachelors degree; 3 = post-baccalaureate degree) p_edu (father highest education level) - ordinal (same as m_edu) f_color (favorite primary color) - nominal ("blue", "red", or "yellow") asthma (child asthma status) - binary (1 = asthma; 0 = no asthma) The goal of this example is to make use of LASSO to create a model predicting child asthma status from the list of 6 potential predictor variables ( age , gender , bmi_p , m_edu , p_edu , and f_color ). Obviously the sample size is an issue here, but I am hoping to gain more insight into how to handle the different types of variables (i.e., continuous, ordinal, nominal, and binary) within the glmnet framework when the outcome is binary (1 = asthma; 0 = no asthma). As such, would anyone being willing to provide a sample R script along with explanations for this mock example using LASSO with the above data to predict asthma status? Although very basic, I know I, and likely many others on CV, would greatly appreciate this! | library(glmnet)
age <- c(4, 8, 7, 12, 6, 9, 10, 14, 7)
gender <- as.factor(c(1, 0, 1, 1, 1, 0, 1, 0, 0))
bmi_p <- c(0.86, 0.45, 0.99, 0.84, 0.85, 0.67, 0.91, 0.29, 0.88)
m_edu <- as.factor(c(0, 1, 1, 2, 2, 3, 2, 0, 1))
p_edu <- as.factor(c(0, 2, 2, 2, 2, 3, 2, 0, 0))
f_color <- as.factor(c("blue", "blue", "yellow", "red", "red", "yellow",
"yellow", "red", "yellow"))
asthma <- c(1, 1, 0, 1, 0, 0, 0, 1, 1)
xfactors <- model.matrix(asthma ~ gender + m_edu + p_edu + f_color)[, -1]
x <- as.matrix(data.frame(age, bmi_p, xfactors))
# Note alpha=1 for lasso only and can blend with ridge penalty down to
# alpha=0 ridge only.
glmmod <- glmnet(x, y=as.factor(asthma), alpha=1, family="binomial")
# Plot variable coefficients vs. shrinkage parameter lambda.
plot(glmmod, xvar="lambda") Categorical variables are usually first transformed into factors,
then a dummy variable matrix of predictors is created and along with the continuous predictors, is passed to the model.
Keep in mind, glmnet uses both ridge and lasso penalties, but can be set to either alone. Some results: # Model shown for lambda up to first 3 selected variables.
# Lambda can have manual tuning grid for wider range.
glmmod
# Call: glmnet(x = x, y = as.factor(asthma), family = "binomial", alpha = 1)
#
# Df %Dev Lambda
# [1,] 0 0.00000 0.273300
# [2,] 1 0.01955 0.260900
# [3,] 1 0.03737 0.249000
# [4,] 1 0.05362 0.237700
# [5,] 1 0.06847 0.226900
# [6,] 1 0.08204 0.216600
# [7,] 1 0.09445 0.206700
# [8,] 1 0.10580 0.197300
# [9,] 1 0.11620 0.188400
# [10,] 3 0.13120 0.179800
# [11,] 3 0.15390 0.171600
# ... Coefficients can be extracted from the glmmod. Here shown with 3 variables selected. coef(glmmod)[, 10]
# (Intercept) age bmi_p gender1 m_edu1
# 0.59445647 0.00000000 0.00000000 -0.01893607 0.00000000
# m_edu2 m_edu3 p_edu2 p_edu3 f_colorred
# 0.00000000 0.00000000 -0.01882883 0.00000000 0.00000000
# f_coloryellow
# -0.77207831 Lastly, cross validation can also be used to select lambda. cv.glmmod <- cv.glmnet(x, y=asthma, alpha=1)
plot(cv.glmmod) (best.lambda <- cv.glmmod$lambda.min)
# [1] 0.2732972 | {
"source": [
"https://stats.stackexchange.com/questions/72251",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/29068/"
]
} |
72,381 | I have an experimentally observed distribution that looks very similar to a gamma or lognormal distribution. I've read that the lognormal distribution is the maximum entropy probability distribution for a random variate $X$ for which the mean and variance of $\ln(X)$ are fixed. Does the gamma distribution have any similar properties? | As for qualitative differences, the lognormal and gamma are, as you say, quite similar. Indeed, in practice they're often used to model the same phenomena (some people will use a gamma where others use a lognormal). They are both, for example, constant-coefficient-of-variation models (the CV for the lognormal is $\sqrt{e^{\sigma^2} -1}$ , for the gamma it's $1/\sqrt \alpha$ ). [How can it be constant if it depends on a parameter, you ask? It applies when you model the scale (location for the log scale); for the lognormal, the $\mu$ parameter acts as the log of a scale parameter, while for the gamma, the scale is the parameter that isn't the shape parameter (or its reciprocal if you use the shape-rate parameterization). I'll call the scale parameter for the gamma distribution $\beta$ . Gamma GLMs model the mean ( $\mu=\alpha\beta$ ) while holding $\alpha$ constant; in that case $\mu$ is also a scale parameter. A model with varying $\mu$ and constant $\alpha$ or $\sigma$ respectively will have constant CV.] You might find it instructive to look at the density of their logs , which often shows a very clear difference. The log of a lognormal random variable is ... normal. It's symmetric. The log of a gamma random variable is left-skew. Depending on the value of the shape parameter, it may be quite skew or nearly symmetric. Here's an example, with both lognormal and gamma having mean 1 and variance 1/4. The top plot shows the densities (gamma in green, lognormal in blue), and the lower one shows the densities of the logs: (Plotting the log of the density of the logs is also useful. That is, taking a log-scale on the y-axis above) This difference implies that the gamma has more of a tail on the left, and less of a tail on the right; the far right tail of the lognormal is heavier and its left tail lighter. And indeed, if you look at the skewness, of the lognormal and gamma, for a given coefficient of variation, the lognormal is more right skew ( $\text{CV}^3+3\text{CV}$ ) than the gamma ( $2\text{CV}$ ). | {
"source": [
"https://stats.stackexchange.com/questions/72381",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/27260/"
]
} |
72,479 | I have read that the sum of Gamma random variables with the same scale parameter is another Gamma random variable. I've also seen the paper by Moschopoulos describing a method for the summation of a general set of Gamma random variables. I have tried implementing Moschopoulos' method but have yet to have success. What does the summation of a general set of Gamma random variables look like? To make this question concrete, what does it look like for: $\text{Gamma}(3,1) + \text{Gamma}(4,2) + \text{Gamma}(5,1)$ If the parameters above are not particularly revealing, please suggest others. | First, combine any sums having the same scale factor : a $\Gamma(n, \beta)$ plus a $\Gamma(m,\beta)$ variate form a $\Gamma(n+m,\beta)$ variate. Next, observe that the characteristic function (cf) of $\Gamma(n, \beta)$ is $(1-i \beta t)^{-n}$ , whence the cf of a sum of these distributions is the product $$\prod_{j} \frac{1}{(1-i \beta_j t)^{n_j}}.$$ When the $n_j$ are all integral, this product expands as a partial fraction into a linear combination of $(1-i \beta_j t)^{-\nu}$ where the $\nu$ are integers between $1$ and $n_j$ . In the example with $\beta_1 = 1, n_1=8$ (from the sum of $\Gamma(3,1)$ and $\Gamma(5,1)$ ) and $\beta_2 = 2, n_2=4$ we find $$\begin{aligned}&\frac{1}{(1-i t)^{8}}\frac{1}{(1- 2i t)^{4}} = \\
&\frac{1}{(t+i)^8}-\frac{8 i}{(t+i)^7}-\frac{40}{(t+i)^6}+\frac{160 i}{(t+i)^5}+\frac{560}{(t+i)^4}-\frac{1792 i}{(t+i)^3}\\
&-\frac{5376}{(t+i)^2}+\frac{15360 i}{t+i}+\frac{256}{(2t+i)^4}+\frac{2048 i}{(2 t+i)^3}-\frac{9216}{(2t+i)^2}-\frac{30720 i}{2t+i}.
\end{aligned}$$ The inverse of taking the cf is the inverse Fourier Transform, which is linear : that means we may apply it term by term. Each term is recognizable as a multiple of the cf of a Gamma distribution and so is readily inverted to yield the PDF . In the example we obtain $$\begin{aligned}
&\frac{e^{-t} t^7}{5040}+\frac{1}{90} e^{-t} t^6+\frac{1}{3} e^{-t} t^5+\frac{20}{3} e^{-t} t^4+\frac{8}{3} e^{-\frac{t}{2}} t^3+\frac{280}{3} e^{-t} t^3\\
&-128 e^{-\frac{t}{2}} t^2+896 e^{-t} t^2+2304 e^{-\frac{t}{2}} t+5376 e^{-t} t-15360 e^{-\frac{t}{2}}+15360 e^{-t}
\end{aligned}$$ for the PDF of the sum. This is a finite mixture of Gamma distributions having scale factors equal to those within the sum and shape factors less than or equal to those within the sum. Except in special cases (where some cancellation might occur), the number of terms is given by the total shape parameter $n_1 + n_2 + \cdots$ (assuming all the $n_j$ are different). As a test, here is a histogram of $10^4$ results obtained by adding independent draws from the $\Gamma(8,1)$ and $\Gamma(4,2)$ distributions. On it is superimposed the graph of $10^4$ times the preceding function. The fit is very good. Moschopoulos carries this idea one step further by expanding the cf of the sum into an infinite series of Gamma characteristic functions whenever one or more of the $n_i$ is non-integral, and then terminates the infinite series at a point where it is reasonably well approximated. | {
"source": [
"https://stats.stackexchange.com/questions/72479",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/27260/"
]
} |
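A Monte Carlo check (not part of the original post, assuming NumPy and matplotlib) of the closed-form mixture density derived in the answer above for Gamma(3,1) + Gamma(4,2) + Gamma(5,1) = Gamma(8,1) + Gamma(4,2), using the shape-scale parameterization that matches the characteristic function (1 - i*beta*t)^(-n):
import numpy as np
import matplotlib.pyplot as plt
rng = np.random.default_rng(0)
samples = rng.gamma(shape=8, scale=1, size=100_000) + rng.gamma(shape=4, scale=2, size=100_000)
def pdf_sum(t):
    """The finite Gamma mixture obtained from the partial-fraction expansion above."""
    e1, e2 = np.exp(-t), np.exp(-t / 2)
    return (e1*t**7/5040 + e1*t**6/90 + e1*t**5/3 + 20*e1*t**4/3
            + 8*e2*t**3/3 + 280*e1*t**3/3 - 128*e2*t**2 + 896*e1*t**2
            + 2304*e2*t + 5376*e1*t - 15360*e2 + 15360*e1)
t = np.linspace(0.01, 50, 500)
plt.hist(samples, bins=100, density=True, alpha=0.4)
plt.plot(t, pdf_sum(t), 'r')
plt.show()
The histogram and the red curve should agree closely, mirroring the figure described in the answer.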
72,613 | What is the exact meaning of the subscript notation $\mathbb{E}_X[f(X)]$ in conditional expectations in the framework of measure theory ? These subscripts do not appear in the definition of conditional expectation, but we may see for example in this page of wikipedia . (Note that it wasn't always the case, the same page few months ago). What should be for example the meaning of $\mathbb{E}_X[X+Y]$ with $X\sim\mathcal{N}(0,1)$ and $Y=X+1$ ? | In an expression where more than one random variables are involved, the symbol $E$ alone does not clarify with respect to which random variable is the expected value "taken". For example $$E[h(X,Y)] =\text{?} \int_{-\infty}^{\infty} h(x,y) f_X(x)\,dx$$ or $$E[h(X,Y)] = \text{?} \int_{-\infty}^\infty h(x,y) f_Y(y)\,dy$$ Neither . When many random variables are involved, and there is no subscript in the $E$ symbol, the expected value is taken with respect to their joint distribution: $$E[h(X,Y)] = \int_{-\infty}^\infty \int_{-\infty}^\infty h(x,y) f_{XY}(x,y) \, dx \, dy$$ When a subscript is present... in some cases it tells us on which variable we should condition . So $$E_X[h(X,Y)] = E[h(X,Y)\mid X] = \int_{-\infty}^\infty h(x,y) f_{h(X,Y)\mid X}(h(x,y)\mid x)\,dy $$ Here, we "integrate out" the $Y$ variable, and we are left with a function of $X$ . ...But in other cases, it tells us which marginal density to use for the "averaging" $$E_X[h(X,Y)] = \int_{-\infty}^\infty h(x,y) f_{X}(x) \, dx $$ Here, we "average over" the $X$ variable, and we are left with a function of $Y$ . Rather confusing I would say, but who said that scientific notation is totally free of ambiguity or multiple use? You should look how each author defines the use of such symbols. | {
"source": [
"https://stats.stackexchange.com/questions/72613",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/16137/"
]
} |
72,774 | I am trying to get a good grasp on the EM algorithm, to be able to implement and use it. I spent a full day reading the theory and a paper where EM is used to track an aircraft using the position information coming from a radar. Honestly, I don't think I fully understand the underlying idea. Can someone point me to a numerical example showing a few iterations (3-4) of the EM for a simpler problem (like estimating the parameters of a Gaussian distribution or a sequence of a sinusoidal series or fitting a line). Even if someone can point me to a piece of code (with synthetic data), I can try to step through the code. | This is a recipe to learn EM with a practical and (in my opinion) very intuitive 'Coin-Toss' example: Read this short EM tutorial paper by Do and Batzoglou. This is the schema where the coin toss example is explained: You may have question marks in your head, especially regarding where the probabilities in the Expectation step come from. Please have a look at the explanations on this maths stack exchange page . Look at/run this code that I wrote in Python that simulates the solution to the coin-toss problem in the EM tutorial paper of item 1: import numpy as np
import math
import matplotlib.pyplot as plt
## E-M Coin Toss Example as given in the EM tutorial paper by Do and Batzoglou* ##
def get_binomial_log_likelihood(obs,probs):
    """ Return the (log)likelihood of obs, given the probs"""
    # Binomial distribution log pdf:
    # ln[f(k|N, p)] = ln[comb(N,k)] + num_heads*ln(pH) + (N-num_heads)*ln(1-pH)
    N = sum(obs)  # number of trials
    k = obs[0]    # number of heads
    # take the log of the binomial coefficient so the function really returns a
    # log-likelihood; this constant cancels in the E-step weights either way
    binomial_coeff = math.log(math.factorial(N) / (math.factorial(N-k) * math.factorial(k)))
    prod_probs = obs[0]*math.log(probs[0]) + obs[1]*math.log(1-probs[0])
    log_lik = binomial_coeff + prod_probs
    return log_lik
# 1st: Coin B, {HTTTHHTHTH}, 5H,5T
# 2nd: Coin A, {HHHHTHHHHH}, 9H,1T
# 3rd: Coin A, {HTHHHHHTHH}, 8H,2T
# 4th: Coin B, {HTHTTTHHTT}, 4H,6T
# 5th: Coin A, {THHHTHHHTH}, 7H,3T
# so, from MLE: pA(heads) = 0.80 and pB(heads)=0.45
# represent the experiments
head_counts = np.array([5,9,8,4,7])
tail_counts = 10-head_counts
experiments = list(zip(head_counts,tail_counts))  # list() so it can be indexed and len()'d under Python 3
# initialise the pA(heads) and pB(heads)
pA_heads = np.zeros(100); pA_heads[0] = 0.60
pB_heads = np.zeros(100); pB_heads[0] = 0.50
# E-M begins!
delta = 0.001
j = 0 # iteration counter
improvement = float('inf')
while (improvement>delta):
    expectation_A = np.zeros((len(experiments),2), dtype=float)
    expectation_B = np.zeros((len(experiments),2), dtype=float)
    for i in range(0,len(experiments)):
        e = experiments[i] # i'th experiment
        # loglikelihood of e given coin A:
        ll_A = get_binomial_log_likelihood(e,np.array([pA_heads[j],1-pA_heads[j]]))
        # loglikelihood of e given coin B
        ll_B = get_binomial_log_likelihood(e,np.array([pB_heads[j],1-pB_heads[j]]))
        # corresponding weight of A proportional to likelihood of A
        weightA = math.exp(ll_A) / ( math.exp(ll_A) + math.exp(ll_B) )
        # corresponding weight of B proportional to likelihood of B
        weightB = math.exp(ll_B) / ( math.exp(ll_A) + math.exp(ll_B) )
        expectation_A[i] = np.dot(weightA, e)
        expectation_B[i] = np.dot(weightB, e)
    pA_heads[j+1] = sum(expectation_A)[0] / sum(sum(expectation_A))
    pB_heads[j+1] = sum(expectation_B)[0] / sum(sum(expectation_B))
    improvement = ( max( abs(np.array([pA_heads[j+1],pB_heads[j+1]]) -
                             np.array([pA_heads[j],pB_heads[j]]) )) )
    j = j+1
plt.figure();
plt.plot(range(0,j),pA_heads[0:j], 'r--')
plt.plot(range(0,j),pB_heads[0:j])
plt.show() | {
"source": [
"https://stats.stackexchange.com/questions/72774",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/30630/"
]
} |
73,032 | When using support vector machine, are there any guidelines on choosing linear kernel vs. nonlinear kernel, like RBF? I once heard that non-linear kernel tends not to perform well once the number of features is large. Are there any references on this issue? | Usually, the decision is whether to use linear or an RBF (aka Gaussian) kernel. There are two main factors to consider: Solving the optimisation problem for a linear kernel is much faster, see e.g. LIBLINEAR. Typically, the best possible predictive performance is better for a nonlinear kernel (or at least as good as the linear one). It's been shown that the linear kernel is a degenerate version of RBF , hence the linear kernel is never more accurate than a properly tuned RBF kernel. Quoting the abstract from the paper I linked: The analysis also indicates that if complete model selection using the Gaussian kernel has been conducted, there is no need to consider linear SVM. A basic rule of thumb is briefly covered in NTU's practical guide to support vector classification (Appendix C). If the number of features is large, one may not need to map data to a higher dimensional space. That is, the nonlinear mapping does not improve the performance.
Using the linear kernel is good enough, and one only searches for the parameter C. Your conclusion is more or less right but you have the argument backwards. In practice, the linear kernel tends to perform very well when the number of features is large (e.g. there is no need to map to an even higher dimensional feature space). A typical example of this is document classification, with thousands of dimensions in input space. In those cases, nonlinear kernels are not necessarily significantly more accurate than the linear one. This basically means nonlinear kernels lose their appeal: they require way more resources to train with little to no gain in predictive performance, so why bother. TL;DR Always try linear first since it is way faster to train (AND test). If the accuracy suffices, pat yourself on the back for a job well done and move on to the next problem. If not, try a nonlinear kernel. | {
"source": [
"https://stats.stackexchange.com/questions/73032",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/3269/"
]
} |
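A rough illustration (assuming scikit-learn is available; not part of the original answer) of the "try linear first" advice above: on a high-dimensional synthetic problem, a linear SVM is typically competitive with a tuned RBF SVM and much faster to fit. The dataset and the small C/gamma grid below are arbitrary choices for demonstration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC, SVC
X, y = make_classification(n_samples=1000, n_features=500, n_informative=50,
                           random_state=0)
lin = LinearSVC(C=1.0, dual=True, max_iter=10_000)
print("linear :", cross_val_score(lin, X, y, cv=5).mean())
for C in (0.1, 1, 10):
    for gamma in ("scale", 0.001):
        rbf = SVC(kernel="rbf", C=C, gamma=gamma)
        score = cross_val_score(rbf, X, y, cv=5).mean()
        print(f"rbf C={C}, gamma={gamma}: {score:.3f}")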
73,065 | I've already read all the pages in this site trying to find the answer to my problem but no one seems to be the right one form me... First I explain you the kind of data I'm working with... Let's say that I have an array vector with several names of city, one for each of 300 users. I also have another array vector with scores response to a survey of each user or a continuous value for each user. I would like to know if exist a correlation coefficient that compute the correlation between these two variables so, between a nominal and a numeric/continuous or ordinal variables. I've searched on the Internet and in some pages they suggest to use the contingency coefficient or Cramer's V or Lambda coefficient or Eta . For each of this measure the just say that they could be applied for such data in which we have a nominal variable and interval or numerical variable.
The thing is that, searching and searching and trying to understand every one of them, sometimes it is stated (or apparent from the examples) that they are only reasonable to use if you have a dichotomous nominal variable (except for Cramer's V), and at other times no requirement on the type of data is mentioned at all.
A lot of other pages say that is right to apply regression instead, that is right, but I would just simply like to know if there is a coefficient like pearson/spearman for this kind of data. I also think that is no so properly to use Spearman Correlation coeff since the cities are not sortable. I have also built the function of Cramer'sV and Eta by myself (I'm working with Matlab) but for Eta they don't talk about any p-value to see if the coefficient is statistically significant... In the matlabWorks site there is also a nice toolbox that says to compute eta^2 but the kind of input it needs is not understandable. Is here someone that have done a test like mine? If you need more detail to understand the kind of data I'm using just ask me and I'll try to explain you better. | Nominal vs Interval The most classic "correlation" measure between a nominal and an interval ("numeric") variable is Eta , also called correlation ratio, and equal to the root R-square of the one-way ANOVA (with p-value = that of the ANOVA). Eta can be seen as a symmetric association measure, like correlation, because Eta of ANOVA (with the nominal as independent, numeric as dependent) is equal to Pillai's trace of multivariate regression (with the numeric as independent, set of dummy variables corresponding to the nominal as dependent). A more subtle measure is intraclass correlation coefficient ( ICC ). Whereas Eta grasps only the difference between groups (defined by the nominal variable) in respect to the numeric variable, ICC simultaneously also measures the coordination or agreemant between numeric values inside groups; in other words, ICC (particularly the original unbiased "pairing" ICC version) stays on the level of values while Eta operates on the level of statistics (group means vs group variances). Nominal vs Ordinal The question about "correlation" measure between a nominal and an ordinal variable is less apparent. The reason of the difficulty is that ordinal scale is, by its nature, more "mystic" or "twisted" than interval or nominal scales. No wonder that statistical analyses specially for ordinal data are relatively poorly formulated so far. One way might be to convert your ordinal data into ranks and then compute Eta as if the ranks were interval data. The p-value of such Eta = that of Kruskal-Wallis analysis. This approach seems warranted due to the same reasoning as why Spearman rho is used to correlate two ordinal variables. That logic is "when you don't know the interval widths on the scale, cut the Gordian knot by linearizing any possible monotonicity: go rank the data". Another approach (possibly more rigorous and flexible) would be to use ordinal logistic regression with the ordinal variable as the DV and the nominal one as the IV. The square root of Nagelkerke’s pseudo R-square (with the regression's p-value) is another correlation measure for you. Note that you can experiment with various link functions in ordinal regression. This association is, however, not symmetric: the nominal is assumed independent. Yet another approach might be to find such a monotonic transformation of ordinal data into interval - instead of ranking of the penultimate paragraph - that would maximize R (i.e. Eta ) for you. This is categorical regression (= linear regression with optimal scaling). Still another approach is to perform classification tree , such as CHAID, with the ordinal variable as predictor. 
This procedure will bin together (hence it is the approach opposite to the previous one) adjacent ordered categories which do not distinguish among categories of the nominal predictand. Then you could rely on Chi-square-based association measures (such as Cramer's V) as if you correlate nominal vs nominal variables. And @Michael in his comment suggests yet one more way - a special coefficient called Freeman's Theta . So, we have arrived so far at these opportunities: (1) Rank, then compute Eta; (2) Use ordinal regression; (3) Use categorical regression ("optimally" transforming ordinal variable into interval); (4) Use classification tree ("optimally" reducing the number of ordered categories); (5) Use Freeman's Theta. | {
"source": [
"https://stats.stackexchange.com/questions/73065",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/31614/"
]
} |
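A minimal implementation (plain NumPy, made-up example data; not part of the original answer) of Eta, the correlation ratio described above: the square root of SS_between / SS_total from a one-way ANOVA of the numeric variable on the nominal groups.
import numpy as np
def correlation_ratio(groups, values):
    groups = np.asarray(groups)
    values = np.asarray(values, dtype=float)
    grand_mean = values.mean()
    ss_total = ((values - grand_mean) ** 2).sum()
    ss_between = sum(
        len(values[groups == g]) * (values[groups == g].mean() - grand_mean) ** 2
        for g in np.unique(groups)
    )
    return np.sqrt(ss_between / ss_total)
cities = ["A", "A", "B", "B", "B", "C", "C"]
scores = [3.1, 2.9, 5.0, 5.4, 4.8, 1.2, 1.5]
print(correlation_ratio(cities, scores))   # 1.0 would mean the groups fully explain the scores
For the nominal-vs-ordinal option (1) in the answer, the same function can be applied to the ranks of the ordinal variable (e.g. from scipy.stats.rankdata).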
73,165 | I have a logistic regression model (fit via glmnet in R with elastic net regularization), and I would like to maximize the difference between true positives and false positives. In order to do this, the following procedure came to mind: Fit standard logistic regression model Using prediction threshold as 0.5, identify all positive predictions Assign weight 1 for positively predicted observations, 0 for all others Fit weighted logistic regression model What would be the flaws with this approach? What would be the correct way to proceed with this problem? The reason for wanting to maximize the difference between the number of true positives and false negatives is due to the design of my application. As part of a class project, I am building a autonomous participant in an online marketplace - if my model predicts it can buy something and sell it later at a higher price, it places a bid. I would like to stick to logistic regression and output binary outcomes (win, lose) based on fixed costs and unit price increments (I gain or lose the same amount on every transaction). A false positive hurts me because it means that I buy something and am unable to sell it for a higher price. However, a false negative doesn't hurt me (only in terms of opportunity cost) because it just means if I didn't buy, but if I had, I would have made money. Similarly, a true positive benefits me because I buy and then sell for a higher price, but a true negative doesn't benefit me because I didn't take any action. I agree that the 0.5 cut-off is completely arbitrary, and when I optimized the model from step 1 on the prediction threshold which yields the highest difference between true/false positives, it turns out to be closer to 0.4. I think this is due to the skewed nature of my data - the ratio between negatives and positives is about 1:3. Right now, I am following the following steps: Split data intto training/test Fit model on training, make predictions in test set and compute difference between true/false positives Fit model on full, make predictions in test set and compute difference between true/false positives The difference between true/false positives is smaller in step #3 than in step #2, despite the training set being a subset of the full set. Since I don't care whether the model in #3 has more true negatives and less false negatives, is there anything I can do without altering the likelihood function itself? | You don't seem to want logistic regression at all. What you say is "I would like to maximize the difference between true positives and false positives." That is a fine objective function, but it is not logistic regression. Let's see what it is. First, some notation. The dependent variable is going to be $Y_i$: \begin{align}
Y_i &= \left\{ \begin{array}{l}
1 \qquad \textrm{Purchase $i$ was profitable}\\
0 \qquad \textrm{Purchase $i$ was un-profitable}
\end{array}
\right.
\end{align} The independent variables (the stuff you use to try to predict whether you should buy) are going to be $X_i$ (a vector). The parameter you are trying to estimate is going to be $\beta$ (a vector). You will predict buy when $X_i\beta>0$. For observation $i$, you predict buy when $X_i\beta>0$ or when the indicator function $\mathbf{1}_{X_i\beta>0}=1$. A true positive happens on observation $i$ when both $Y_i=1$ and $\mathbf{1}_{X_i\beta>0}=1$. A false positive on observation $i$ happens when $Y_i=0$ and $\mathbf{1}_{X_i\beta>0}=1$. You wish to find the $\beta$ which maximizes true positives minus false positives, or:
\begin{equation}
max_\beta \; \sum_{i=1}^N Y_i\cdot\mathbf{1}_{X_i\beta>0} - \sum_{i=1}^N (1-Y_i)\cdot\mathbf{1}_{X_i\beta>0}
\end{equation} This is not an especially familiar objective function for estimating a discrete response model, but bear with me while I do a little algebra on the objective function:
\begin{align}
&\sum_{i=1}^N Y_i\cdot\mathbf{1}_{X_i\beta>0} - \sum_{i=1}^N (1-Y_i)\cdot\mathbf{1}_{X_i\beta>0}\\
= &\sum_{i=1}^N Y_i\cdot\mathbf{1}_{X_i\beta>0} - \sum_{i=1}^N \mathbf{1}_{X_i\beta>0}
+ \sum_{i=1}^N Y_i\cdot\mathbf{1}_{X_i\beta>0}\\
= &\sum_{i=1}^N Y_i\cdot\mathbf{1}_{X_i\beta>0} - \sum_{i=1}^N \mathbf{1}_{X_i\beta>0}
+ \sum_{i=1}^N Y_i\cdot\mathbf{1}_{X_i\beta>0} \\
& \qquad + \sum_{i=1}^N 1 - \sum_{i=1}^N 1 + \sum_{i=1}^N Y_i - \sum_{i=1}^N Y_i\\
= &\sum_{i=1}^N Y_i\cdot\mathbf{1}_{X_i\beta>0} + \sum_{i=1}^N (1-Y_i)(1-\mathbf{1}_{X_i\beta>0}) - \sum_{i=1}^N 1 + \sum_{i=1}^N Y_i \\
\end{align} OK, now notice that the last two terms in that sum are not functions of $\beta$, so we can ignore them in the maximization. Finally, we have just shown that the problem you want to solve, "maximize the difference between true positives and false positives" is the same as this problem:
\begin{equation}
max_\beta \; \sum_{i=1}^N Y_i\cdot\mathbf{1}_{X_i\beta>0} + \sum_{i=1}^N (1-Y_i)(1-\mathbf{1}_{X_i\beta>0})
\end{equation} Now, that estimator has a name! It is named the maximum score estimator. It is a very intuitive way to estimate the parameter of a discrete response model. The parameter is chosen so as to maximize the number of correct predictions. The first term is the number of true positives, and the second term is the number of true negatives. This is a pretty good way to estimate a (binary) discrete response model. The estimator is consistent, for example. (Manski, 1985, J of Econometrics) There are some oddities to this estimator, though. First, it is not unique in small samples. Once you have found one $\beta$ which solves the maximization, then any other $\beta$ which makes the exact same predictions in your dataset will solve the maximization---so, infinitely many $\beta$s close to the one you found. Also, the estimator is not asymptotically normal, and it converges slower than typical maximum likelihood estimators---cube root $N$ instead of root $N$ convergence. (Kim and Pollard, 1990, Ann of Stat) Finally, you can't use bootstrapping to do inference on it. (Abrevaya & Huang, 2005, Econometrica) There are some papers using this estimator though---there is a fun one about predicting results in the NCAA basketball tournament by Caudill, International Journal of Forecasting, April 2003, v. 19, iss. 2, pp. 313-17. An estimator that overcomes most of these problems is Horowitz's smoothed maximum score estimator (Horowitz, 1992, Econometrica and Horowitz, 2002, J of Econometrics). It gives a root-$N$ consistent, asymptotically normal, unique estimator which is amenable to bootstrapping. Horowitz provides example code to implement his estimator on his webpage. | {
"source": [
"https://stats.stackexchange.com/questions/73165",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/25339/"
]
} |
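A crude sketch (synthetic data, plain NumPy; not from the original answer) of the maximum score estimator derived above: choose beta to maximize the number of correct buy/no-buy predictions. Random search over unit-norm directions is used purely for illustration; this is not Horowitz's smoothed estimator and it scales poorly.
import numpy as np
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # intercept + 2 covariates
beta_true = np.array([0.5, 1.0, -2.0])
y = (X @ beta_true + rng.logistic(size=n) > 0).astype(int)
def score(beta):
    pred = (X @ beta > 0).astype(int)
    # number of true positives plus true negatives, as in the objective above
    return np.sum(y * pred) + np.sum((1 - y) * (1 - pred))
# beta is only identified up to scale, so search over unit-norm candidate directions
candidates = rng.normal(size=(20_000, 3))
candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)
best = max(candidates, key=score)
print("best direction:", best, "correct predictions:", score(best))
print("true direction:", beta_true / np.linalg.norm(beta_true))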
73,320 | I am currently writing a paper with several multiple regression analyses. While visualizing univariate linear regression is easy via scatter plots, I was wondering whether there is any good way to visualize multiple linear regressions? I am currently just plotting scatter plots like dependent variable vs. 1st independent variable, then vs. 2nd independent variable, etc. I would really appreciate any suggestions. | There is nothing wrong with your current strategy. If you have a multiple regression model with only two explanatory variables then you could try to make a 3D-ish plot that displays the predicted regression plane, but most software don't make this easy to do. Another possibility is to use a coplot (see also: coplot in R or this pdf ), which can represent three or even four variables, but many people don't know how to read them. Essentially however, if you don't have any interactions, then the predicted marginal relationship between $x_j$ and $y$ will be the same as predicted conditional relationship (plus or minus some vertical shift) at any specific level of your other $x$ variables. Thus, you can simply set all other $x$ variables at their means and find the predicted line $\hat y = \hat\beta_0 + \cdots + \hat\beta_j x_j + \cdots + \hat\beta_p \bar x_p$ and plot that line on a scatterplot of $(x_j, y)$ pairs. Moreover, you will end up with $p$ such plots, although you might not include some of them if you think they are not important. (For example, it is common to have a multiple regression model with a single variable of interest and some control variables, and only present the first such plot). On the other hand, if you do have interactions, then you should figure out which of the interacting variables you are most interested in and plot the predicted relationship between that variable and the response variable, but with several lines on the same plot. The other interacting variable is set to different levels for each of those lines. Typical values would be the mean and $\pm$ 1 SD of the interacting variable. To make this clearer, imagine you have only two variables, $x_1$ and $x_2$ , and you have an interaction between them, and that $x_1$ is the focus of your study, then you might make a single plot with these three lines: \begin{align}
\hat y &= \hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 (\bar x_2 - s_{x_2}) + \hat\beta_3 x_1(\bar x_2 - s_{x_2}) \\
\hat y &= \hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 \bar x_2 \quad\quad\quad\ + \hat\beta_3 x_1\bar x_2 \\
\hat y &= \hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 (\bar x_2 + s_{x_2}) + \hat\beta_3 x_1(\bar x_2 + s_{x_2})
\end{align} An example plot that's similar (albeit with a binary moderator) can be seen in my answer to Plot regression with interaction in R . | {
"source": [
"https://stats.stackexchange.com/questions/73320",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/29321/"
]
} |
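A sketch (made-up data, NumPy + matplotlib; not part of the original answer) of the strategy described above: fit the multiple regression, then plot y against x1 together with the predicted line obtained by holding the other predictor at its mean.
import numpy as np
import matplotlib.pyplot as plt
rng = np.random.default_rng(2)
n = 100
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)
y = 1 + 2 * x1 - 1.5 * x2 + rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]          # [b0, b1, b2]
grid = np.linspace(x1.min(), x1.max(), 50)
yhat = b[0] + b[1] * grid + b[2] * x2.mean()      # x2 fixed at its mean
plt.scatter(x1, y, alpha=0.5)
plt.plot(grid, yhat, 'r')
plt.xlabel('x1'); plt.ylabel('y')
plt.show()
For a model with an interaction, the same idea yields the three lines in the display above by fixing the moderator at its mean and at mean plus/minus one standard deviation.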
73,322 | I am looking at my statistics book and an article online for linear regression and was wondering if anyone can verify that these two equations are entirely different. Consider the equation $\hat{y} = ax + b$ In my book, a and b are : $a = \frac{r \cdot S_{y}}{S_{x}}$ $b = \bar{y} - a\bar{x}$ $r = \sum \frac{(x_{i} - \bar{x})(y_{i} -\bar{y})}{S_{x}S_{y}(n-1)}$ $\displaystyle S_{y} = \sqrt{ \frac{\sum (y_i - \bar{y})^{2}}{(n-1)} }$ From one online article, a and b are: $\displaystyle a = \frac{n \sum x_{i}y_{i} - \sum x_{i} \sum y_{i}}{n \sum x^2_{i} - (\sum x_{i})^2}$ $b = \bar{y} - a\bar{x}$. The a from the online article vaguely looks like covariance in the numerator and the denominator looks like variance but for only one random variable, not two. Can someone explain the discrepancy (if there are any) and construct an argument for my book's choice? I can understand the second formulation mainly because it comes from setting partial derivatives to zero to minimize an objective function and then finding the coefficients a and b. | There is nothing wrong with your current strategy. If you have a multiple regression model with only two explanatory variables then you could try to make a 3D-ish plot that displays the predicted regression plane, but most software don't make this easy to do. Another possibility is to use a coplot (see also: coplot in R or this pdf ), which can represent three or even four variables, but many people don't know how to read them. Essentially however, if you don't have any interactions, then the predicted marginal relationship between $x_j$ and $y$ will be the same as predicted conditional relationship (plus or minus some vertical shift) at any specific level of your other $x$ variables. Thus, you can simply set all other $x$ variables at their means and find the predicted line $\hat y = \hat\beta_0 + \cdots + \hat\beta_j x_j + \cdots + \hat\beta_p \bar x_p$ and plot that line on a scatterplot of $(x_j, y)$ pairs. Moreover, you will end up with $p$ such plots, although you might not include some of them if you think they are not important. (For example, it is common to have a multiple regression model with a single variable of interest and some control variables, and only present the first such plot). On the other hand, if you do have interactions, then you should figure out which of the interacting variables you are most interested in and plot the predicted relationship between that variable and the response variable, but with several lines on the same plot. The other interacting variable is set to different levels for each of those lines. Typical values would be the mean and $\pm$ 1 SD of the interacting variable. To make this clearer, imagine you have only two variables, $x_1$ and $x_2$ , and you have an interaction between them, and that $x_1$ is the focus of your study, then you might make a single plot with these three lines: \begin{align}
\hat y &= \hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 (\bar x_2 - s_{x_2}) + \hat\beta_3 x_1(\bar x_2 - s_{x_2}) \\
\hat y &= \hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 \bar x_2 \quad\quad\quad\ + \hat\beta_3 x_1\bar x_2 \\
\hat y &= \hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 (\bar x_2 + s_{x_2}) + \hat\beta_3 x_1(\bar x_2 + s_{x_2})
\end{align} An example plot that's similar (albeit with a binary moderator) can be seen in my answer to Plot regression with interaction in R . | {
"source": [
"https://stats.stackexchange.com/questions/73322",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/31672/"
]
} |
73,463 | I'm curious about the nature of $\Sigma^{-1}$. Can anybody tell something intuitive about "What does $\Sigma^{-1}$ say about data?" Edit: Thanks for replies After taking some great courses, I'd like to add some points: It is measure of information, i.e., $x^T\Sigma^{-1}x$ is amount of info along the direction $x$. Duality: Since $\Sigma$ is positive definite, so is $\Sigma^{-1}$, so they are dot-product norms, more precisely they are dual norms of each other, so we can derive Fenchel dual for the regularized least squares problem, and do maximization w.r.t dual problem. We can choose either of them, depending on their conditioning. Hilbert space: Columns (and rows) of $\Sigma^{-1}$ and $\Sigma$ span the same space. So there is not any advantage (other that when one of these matrices is ill-conditioned) between representation with $\Sigma^{-1}$ or $\Sigma$ Bayesian Statistics: norm of $\Sigma^{-1}$ plays an important role in the Bayesian statistics. I.e. it determined how much information we have in prior, e.g., when covariance of the prior density is like $\|\Sigma^{-1}\|\rightarrow 0 $ we have non-informative (or probably Jeffreys prior) Frequentist Statistics: It is closely related to Fisher information, using the Cramér–Rao bound. In fact, fisher information matrix (outer product of gradient of log-likelihood with itself) is Cramér–Rao bound it, i.e. $\Sigma^{-1}\preceq \mathcal{F}$ (w.r.t positive semi-definite cone, i.e. w.r.t. concentration ellipsoids). So when $\Sigma^{-1}=\mathcal{F}$ the maximum likelihood estimator is efficient, i.e. maximum information exist in the data, so frequentist regime is optimal. In simpler words, for some likelihood functions (note that functional form of the likelihood purely depend on the probablistic model which supposedly generated data, aka generative model), maximum likelihood is efficient and consistent estimator, rules like a boss. (sorry for overkilling it) | It is a measure of precision just as $\Sigma$ is a measure of dispersion. More elaborately, $\Sigma$ is a measure of how the variables are dispersed around the mean (the diagonal elements) and how they co-vary with other variables (the off-diagonal) elements. The more the dispersion the farther apart they are from the mean and the more they co-vary (in absolute value) with the other variables the stronger is the tendency for them to 'move together' (in the same or opposite direction depending on the sign of the covariance). Similarly, $\Sigma^{-1}$ is a measure of how tightly clustered the variables are around the mean (the diagonal elements) and the extent to which they do not co-vary with the other variables (the off-diagonal elements). Thus, the higher the diagonal element, the tighter the variable is clustered around the mean. The interpretation of the off-diagonal elements is more subtle and I refer you to the other answers for that interpretation. | {
"source": [
"https://stats.stackexchange.com/questions/73463",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/31645/"
]
} |
73,486 | I have 2 (monthly) time-series that look like this: Economical intuition suggests that they are positively related and I can see this on the plot but if I compute correlation between their log-returns $\ln x_t/x_{t-1}$ and $\ln y_t/y_{t-1}$ this correlation is -0.04 this is basically zero and not statistically significant for my data size (~60 points). How can it be? One may say that series are cointegrated $y_t = a x_t +\varepsilon_t$, but then returns should follow $\Delta y_t = a \Delta x_t +\Delta \varepsilon_t$ and correlation between returns would also be significant. So if I see zero correlation between returns, there is no cointegration between levels as well - right? So does this zero correlation means that there is no relation between series?
If yes, why do they follow each other so closely?
If no - how to quantify this relation if correlation between diff'ed series is ~0 and cointegration tests for original series are inconclusive. EDITS: -- added cointegration -> correlation link to address @AlecosPapadopoulos question. | It is a measure of precision just as $\Sigma$ is a measure of dispersion. More elaborately, $\Sigma$ is a measure of how the variables are dispersed around the mean (the diagonal elements) and how they co-vary with other variables (the off-diagonal) elements. The more the dispersion the farther apart they are from the mean and the more they co-vary (in absolute value) with the other variables the stronger is the tendency for them to 'move together' (in the same or opposite direction depending on the sign of the covariance). Similarly, $\Sigma^{-1}$ is a measure of how tightly clustered the variables are around the mean (the diagonal elements) and the extent to which they do not co-vary with the other variables (the off-diagonal elements). Thus, the higher the diagonal element, the tighter the variable is clustered around the mean. The interpretation of the off-diagonal elements is more subtle and I refer you to the other answers for that interpretation. | {
"source": [
"https://stats.stackexchange.com/questions/73486",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/31774/"
]
} |
73,540 | Looking at the Wikipedia definitions of: Mean Squared Error (MSE) Residual Sum of Squares (RSS) It looks to me that $$\text{MSE} = \frac{1}{N} \text{RSS} = \frac{1}{N} \sum (f_i -y_i)^2$$ where $N$ is he number of samples and $f_i$ is our estimation of $y_i$ . However, none of the Wikipedia articles mention this relationship. Why? Am I missing something? | Actually it's mentioned in the Regression section of Mean squared error in Wikipedia: In regression analysis, the term mean squared error is sometimes used
to refer to the unbiased estimate of error variance: the residual sum
of squares divided by the number of degrees of freedom. You can also find some informations here: Errors and residuals in statistics It says the expression mean squared error may have different meanings in different cases, which is tricky sometimes. | {
"source": [
"https://stats.stackexchange.com/questions/73540",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/27838/"
]
} |
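A tiny numeric illustration (NumPy, made-up data; not from the original answer) of the two conventions mentioned above: RSS/N as in the question, versus the regression "mean squared error" RSS/(N - p), i.e. RSS divided by the residual degrees of freedom.
import numpy as np
rng = np.random.default_rng(3)
n, p = 50, 2                               # intercept + slope
x = rng.normal(size=n)
y = 1 + 2 * x + rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
rss = np.sum((y - X @ beta) ** 2)
print("RSS          :", rss)
print("RSS / N      :", rss / n)           # the definition used in the question
print("RSS / (N - p):", rss / (n - p))     # unbiased estimate of the error variance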
73,646 | Suppose I have a sample $(X_n,Y_n), n=1..N$ from the joint distribution of $X$ and $Y$. How do I test the hypothesis that $X$ and $Y$ are independent ? No assumption is made on the joint or marginal distribution laws of $X$ and $Y$ (least of all joint normality, since in that case independence is identical to correlation being $0$). No assumption is made on the nature of a possible relationship between $X$ and $Y$; it may be non-linear, so the variables are uncorrelated ($r=0$) but highly co-dependent ($I=H$). I can see two approaches: Bin both variables and use Fisher's exact test or G-test . Pro: use well-established statistical tests Con: depends on binning Estimate the dependency of $X$ and $Y$: $\frac{I(X;Y)}{H(X,Y)}$ (this is $0$ for independent $X$ and $Y$ and $1$ when they completely determine each other). Pro: produces a number with a clear theoretical meaning Con: depends on the approximate entropy computation (i.e., binning again) Do these approaches make sense? What other methods people use? | This is a very hard problem in general, though your variables are apparently only 1d so that helps. Of course, the first step (when possible) should be to plot the data and see if anything pops out at you; you're in 2d so this should be easy. Here are a few approaches that work in $\mathbb{R}^n$ or even more general settings: As you mentioned, estimate mutual information via entropies. This may be your best option; nearest neighbor-based estimators do okay in low dimensions, and even histograms aren't terrible in 2d. If you're worried about estimation error, this estimator is simple and gives you finite-sample bounds (most others only prove asymptotic properties): Sricharan, Raich, and Hero. Empirical estimation of entropy functionals with confidence. arXiv:1012.4188 [math.ST] Alternatively, there are similar direct estimators for mutual information, e.g. Pál, Póczos, and Svepesári. Estimation of Rényi Entropy and Mutual Information Based on Generalized Nearest-Neighbor Graphs , NIPS 2010. The Hilbert-Schmidt independence criterion: a kernel (in the sense of RKHS, not KDE)-based approach. Gretton, Bousqet, Smola, and Schölkopf, Measuring Statistical Independence with Hilbert-Schmidt Norms , Algorithmic Learning Theory 2005. The Schweizer-Wolff approach: based on copula transformations, and so is invariant to monotone increasing transformations. I'm not very familiar with this one, but I think it's computationally simpler but also maybe less powerful. Schweizer and Wolff, On Nonparametric Measures of Dependence for Random Variables , Annals of Statistics 1981. | {
"source": [
"https://stats.stackexchange.com/questions/73646",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/13538/"
]
} |
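A sketch (NumPy + SciPy; not from the original answer) of approach (1) in the question: bin both variables and test independence of the resulting contingency table, either with the usual chi-square statistic or with the G-test. The number of bins is an arbitrary choice and the result can be sensitive to it, as the question notes.
import numpy as np
from scipy import stats
rng = np.random.default_rng(4)
n = 1000
x = rng.normal(size=n)
y = x**2 + 0.5 * rng.normal(size=n)        # dependent on x but essentially uncorrelated with it
n_bins = 5
x_bins = np.digitize(x, np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1]))
y_bins = np.digitize(y, np.quantile(y, np.linspace(0, 1, n_bins + 1)[1:-1]))
table = np.zeros((n_bins, n_bins))
np.add.at(table, (x_bins, y_bins), 1)
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
g, p_g, _, _ = stats.chi2_contingency(table, lambda_="log-likelihood")   # G-test
print("Pearson correlation:", np.corrcoef(x, y)[0, 1])
print("chi-square p-value :", p_chi2)
print("G-test p-value     :", p_g)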
73,869 | What is a suppressor variable in multiple regression and what might be the ways to display suppression effect visually (its mechanics or its evidence in results)? I'd like to invite everybody who has a thought, to share. | There exist a number of frequenly mentioned regressional effects which conceptually are different but share much in common when seen purely statistically (see e.g. this paper "Equivalence of the Mediation, Confounding and Suppression
Effect" by David MacKinnon et al., or Wikipedia articles): Mediator: IV which conveys effect (totally or partly)
to the DV. Confounder: IV which constitutes or precludes, totally or
partly, effect of another IV to the DV. Moderator: IV which, varying,
manages the strength of the effect of another IV on the DV.
Statistically, it is known as interaction between the two IVs. Suppressor: IV (a mediator or a moderator conceptually) whose inclusion
strengthens the effect of another IV on the DV. I'm not going to discuss to what extent some or all of them are technically similar (for that, read the paper linked above). My aim is to try to show graphically what suppressor is. The above definition that "suppressor is a variable which inclusion strengthens the effect of another IV on the DV" seems to me potentially broad because it does not tell anything about mechanisms of such enhancement. Below I'm discussing one mechanism - the only one I consider to be suppression. If there are other mechanisms as well (as for right now, I haven't tried to meditate of any such other) then either the above "broad" definition should be considered imprecise or my definition of suppression should be considered too narrow. Definition (in my understanding) Suppressor is the independent variable which, when added to the model, raises observed R-square mostly due to its accounting for the residuals left by the model without it, and not due to its own association with the DV (which is comparatively weak). We know that the increase in R-square in response to adding a IV is the squared part correlation of that IV in that new model. This way, if the part correlation of the IV with the DV is greater (by absolute value) than the zero-order $r$ between them, that IV is a suppressor. So, a suppressor mostly "suppresses" the error of the reduced model, being weak as a predictor itself. The error term is the complement to the prediction. The prediction is "projected on" or "shared between" the IVs (regression coefficients), and so is the error term ("complements" to the coefficients). The suppressor suppresses such error components unevenly: greater for some IVs, lesser for other IVs. For those IVs "whose" such components it suppresses greatly it lends considerable facilitating aid by actually raising their regression coefficients . Not strong suppressing effects occurs often and wildly (an example on this site). Strong suppression is typically introduced consciously. A researcher seeks for a characteristic which must correlate with the DV as weak as possible and at the same time would correlate with something in the IV of interest which is considered irrelevant, prediction-void, in respect to the DV. He enters it to the model and gets considerable increase in that IV's predictive power. The suppressor's coefficient is typically not interpreted. I could summarize my definition as follows [up on @Jake's answer and @gung's comments]: Formal (statistical) definition: suppressor is IV with part
correlation larger than zero-order correlation (with the dependent). Conceptual (practical) definition: the above formal definition + the zero-order
correlation is small, so that the suppressor is not a sound predictor
itself. "Suppessor" is a role of a IV in a specific model only, not the characteristic of the separate variable. When other IVs are added or removed, the suppressor can suddenly stop suppressing or resume suppressing or change the focus of its suppressing activity. Normal regression situation The first picture below shows a typical regression with two predictors (we'll speak of linear regression). The picture is copied from here where it is explained in more details. In short, moderately correlated (= having acute angle between them) predictors $X_1$ and $X_2$ span 2-dimesional space "plane X". The dependent variable $Y$ is projected onto it orthogonally, leaving the predicted variable $Y'$ and the residuals with st. deviation equal to the length of $e$ . R-square of the regression is the angle between $Y$ and $Y'$ , and the two regression coefficients are directly related to the skew coordinates $b_1$ and $b_2$ , respectively. This situation I've called normal or typical because both $X_1$ and $X_2$ correlate with $Y$ (oblique angle exists between each of the independents and the dependent) and the predictors compete for the prediction because they are correlated. Suppression situation It is shown on the next picture. This one is like the previous; however $Y$ vector now directs somewhat away from the viewer and $X_2$ changed its direction considerably. $X_2$ acts as a suppressor. Note first of all that it hardly correlates with $Y$ . Hence it cannot be a valuable predictor itself. Second. Imagine $X_2$ is absent and you predict only by $X_1$ ; the prediction of this one-variable regression is depicted as $Y^*$ red vector, the error as $e^*$ vector, and the coefficient is given by $b^*$ coordinate (which is the endpoint of $Y^*$ ). Now bring yourself back to the full model and notice that $X_2$ is fairly correlated with $e^*$ . Thus, $X_2$ when introduced in the model, can explain a considerable portion of that error of the reduced model, cutting down $e^*$ to $e$ . This constellation: (1) $X_2$ is not a rival to $X_1$ as a predictor ; and (2) $X_2$ is a dustman to pick up unpredictedness left by $X_1$ , - makes $X_2$ a suppressor . As a result of its effect, predictive strength of $X_1$ has grown to some extent: $b_1$ is larger than $b^*$ . Well, why is $X_2$ called a suppressor to $X_1$ and how can it reinforce it when "suppressing" it? Look at the next picture. It is exactly the same as the previous. Think again of the model with the single predictor $X_1$ . This predictor could of course be decomposed in two parts or components (shown in grey): the part which is "responsible" for prediction of $Y$ (and thus coinciding with that vector) and the part which is "responsible" for the unpredictedness (and thus parallel to $e^*$ ). It is this second part of $X_1$ - the part irrelevant to $Y$ - is suppressed by $X_2$ when that suppressor is added to the model. The irrelevant part is suppressed and thus, given that the suppressor doesn't itself predict $Y$ any much, the relevant part looks stronger. A suppressor is not a predictor but rather a facilitator for another/other predictor/s. Because it competes with what impedes them to predict. Sign of the suppressor's regression coefficient It is the sign of the correlation between the suppressor and the error variable $e^*$ left by the reduced (without-the-suppressor) model. In the depiction above, it is positive. In other settings (for example, revert the direction of $X_2$ ) it could be negative. Suppression example Example data: y x1 x2
1.64454000 .35118800 1.06384500
1.78520400 .20000000 -1.2031500
-1.3635700 -.96106900 -.46651400
.31454900 .80000000 1.17505400
.31795500 .85859700 -.10061200
.97009700 1.00000000 1.43890400
.66438800 .29267000 1.20404800
-.87025200 -1.8901800 -.99385700
1.96219200 -.27535200 -.58754000
1.03638100 -.24644800 -.11083400
.00741500 1.44742200 -.06923400
1.63435300 .46709500 .96537000
.21981300 .34809500 .55326800
-.28577400 .16670800 .35862100
1.49875800 -1.1375700 -2.8797100
1.67153800 .39603400 -.81070800
1.46203600 1.40152200 -.05767700
-.56326600 -.74452200 .90471600
.29787400 -.92970900 .56189800
-1.5489800 -.83829500 -1.2610800 Linear regression results: Observe that $X_2$ served as suppressor. Its zero-order correlation with $Y$ is practically zero but its part correlation is much larger by magnitude, $-.224$ . It strengthened to some extent the predictive force of $X_1$ (from r $.419$ , a would-be beta in simple regression with it, to beta $.538$ in the multiple regression). According to the formal definition, $X_1$ appeared a suppressor too, because its part correlation is greater than its zero-order correlation. But that is because we have only two IV in the simple example. Conceptually, $X_1$ isn't a suppressor because its $r$ with $Y$ is not about $0$ . By way, sum of squared part correlations exceeded R-square: .4750^2+(-.2241)^2 = .2758 > .2256 , which would not occur in normal regressional situation (see the Venn diagram below). Suppression and coefficient's sign change Adding a variable that will serve a supressor may as well as may not change the sign of some other variables' coefficients. "Suppression" and "change sign" effects are not the same thing. Moreover, I believe that a suppressor can never change sign of those predictors whom they serve suppressor. (It would be a shocking discovery to add the suppressor on purpose to facilitate a variable and then to find it having become indeed stronger but in the opposite direction! I'd be thankful if somebody could show me it is possible.) Suppression and coefficient strengthening To cite an earlier passage: "For those IVs "whose" such components [error components] it suppresses greatly the suppressor lends considerable facilitating aid by actually raising their regression coefficients ". Indeed, in our Example above, $X_2$ , the suppressor, raised the coefficient for $X_1$ . Such enhancement of the unique predictive power of another regressor is often the aim of a suppressor to a model but it is not the definition of suppressor or of suppression effect. For, the aforementioned enhancement of another predictor's capacity via adding more regressors can easily occure in a normal regressional situation without those regressors being suppressors. Here is an example. y x1 x2 x3
1 1 1 1
3 2 2 6
2 3 3 5
3 2 4 2
4 3 5 9
3 4 4 2
2 5 3 3
3 6 4 4
4 7 5 5
5 6 6 6
4 5 7 5
3 4 5 5
4 5 3 5
5 6 4 6
6 7 5 4
5 8 6 6
4 2 7 7
5 3 8 8
6 4 9 4
5 5 3 3
4 6 4 2
3 2 1 1
4 3 5 4
5 4 6 5
6 9 5 4
5 8 3 3
3 5 5 2
2 6 6 1
3 7 7 5
5 8 8 8 Regressions results without and with $X_3$ : Inclusion of $X_3$ in the model raised the beta of $X_1$ from $.381$ to $.399$ (and its corresponding partial correlation with $Y$ from $.420$ to $.451$ ). Still, we find no suppressor in the model. $X_3$ 's part correlation ( $.229$ ) is not greater than its zero-order correlation ( $.427$ ). Same is for the other regressors. "Facilitation" effect was there, but not due to "suppression" effect. Definition of a suppessor is different from just strenghtening/facilitation; and it is about picking up mostly errors, due to which the part correlation exceeds the zero-order one. Suppression and Venn diagram Normal regressional situation is often explained with the help of Venn diagram. A+B+C+D = 1, all $Y$ variability. B+C+D area is the variability accounted by the two IV ( $X_1$ and $X_2$ ), the R-square; the remaining area A is the error variability. B+C = $r_{YX_1}^2$ ; D+C = $r_{YX_2}^2$ , Pearson zero-order correlations. B and D are the squared part (semipartial) correlations: B = $r_{Y(X_1.X_2)}^2$ ; D = $r_{Y(X_2.X_1)}^2$ . B/(A+B) = $r_{YX_1.X_2}^2$ and D/(A+D) = $r_{YX_2.X_1}^2$ are the squared partial correlations which have the same basic meaning as the standardized regression coefficients betas. According to the above definition (which I stick to) that a suppressor is the IV with part correlation greater than zero-order correlation, $X_2$ is the suppressor if D area > D+C area. That cannot be displayed on Venn diagram. (It would imply that C from the view of $X_2$ is not "here" and is not the same entity than C from the view of $X_1$ . One must invent perhaps something like multilayered Venn diagram to wriggle oneself to show it.) P.S. Upon finishing my answer I found this answer (by @gung) with a nice simple (schematic) diagram, which seems to be in agreement with what I showed above by vectors. | {
"source": [
"https://stats.stackexchange.com/questions/73869",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/3277/"
]
} |
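A toy construction (NumPy; deliberately not the example data listed above) matching the geometric account in the answer: x2 is almost uncorrelated with y but correlates with the part of x1 that is irrelevant to y, so the magnitude of its part (semipartial) correlation exceeds its zero-order correlation -- the formal definition of a suppressor given above.
import numpy as np
rng = np.random.default_rng(5)
n = 10_000
u = rng.normal(size=n)                    # the part of x1 relevant to y
v = rng.normal(size=n)                    # the part of x1 irrelevant to y
y = u + 0.3 * rng.normal(size=n)
x1 = u + v
x2 = v + 0.3 * rng.normal(size=n)         # the suppressor
def r_squared(predictors, y):
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return 1 - resid.var() / y.var()
r2_full = r_squared([x1, x2], y)
# squared part correlation of x2 = increase in R^2 when x2 is added to the model
part_x2 = np.sqrt(r2_full - r_squared([x1], y))
print("zero-order r(y, x2)      :", np.corrcoef(y, x2)[0, 1])   # near 0
print("|part correlation of x2| :", part_x2)                    # much larger in magnitude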
74,082 | Please explain to me the difference in Bayesian estimate and Maximum likelihood estimate? | It is a very broad question and my answer here only begins to scratch the surface a bit. I will use the Bayes's rule to explain the concepts. Let’s assume that a set of probability distribution parameters, $\theta$, best explains the dataset $D$. We may wish to estimate the parameters $\theta$ with the help of the Bayes’ Rule: $$p(\theta|D)=\frac{p(D|\theta) * p(\theta)}{p(D)}$$ $$posterior = \frac{likelihood * prior}{evidence}$$ The explanations follow: Maximum Likelihood Estimate With MLE,we seek a point value for $\theta$ which maximizes the likelihood, $p(D|\theta)$, shown in the equation(s) above. We can denote this value as $\hat{\theta}$. In MLE, $\hat{\theta}$ is a point estimate, not a random variable. In other words, in the equation above, MLE treats the term $\frac{p(\theta)}{p(D)}$ as a constant and does NOT allow us to inject our prior beliefs, $p(\theta)$, about the likely values for $\theta$ in the estimation calculations. Bayesian Estimate Bayesian estimation, by contrast, fully calculates (or at times approximates) the posterior distribution $p(\theta|D)$. Bayesian inference treats $\theta$ as a random variable. In Bayesian estimation, we put in probability density functions and get out probability density functions, rather than a single point as in MLE. Of all the $\theta$ values made possible by the output distribution $p(\theta|D)$, it is our job to select a value that we consider best in some sense. For example, we may choose the expected value of $\theta$ assuming its variance is small enough. The variance that we can calculate for the parameter $\theta$ from its posterior distribution allows us to express our confidence in any specific value we may use as an estimate. If the variance is too large, we may declare that there does not exist a good estimate for $\theta$. As a trade-off, Bayesian estimation is made complex by the fact that we now have to deal with the denominator in the Bayes' rule, i.e. $evidence$. Here evidence -or probability of evidence- is represented by: $$p(D) = \int_{\theta} p(D|\theta) * p(\theta) d\theta$$ This leads to the concept of 'conjugate priors' in Bayesian estimation. For a given likelihood function, if we have a choice regarding how we express our prior beliefs, we must use that form which allows us to carry out the integration shown above. The idea of conjugate priors and how they are practically implemented are explained quite well in this post by COOlSerdash. | {
"source": [
"https://stats.stackexchange.com/questions/74082",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/32061/"
]
} |
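A small coin-flip illustration (SciPy; not part of the original answer) of the contrast drawn above: the MLE is a single point, while the Bayesian approach returns a whole posterior distribution. The Beta prior is the conjugate choice for the binomial likelihood mentioned at the end of the answer; the prior parameters (2, 2) are an arbitrary choice.
from scipy import stats
heads, n = 7, 10
# Maximum likelihood: a point estimate
theta_mle = heads / n                              # 0.7
# Bayesian: prior Beta(2, 2) -> posterior Beta(2 + heads, 2 + tails)
posterior = stats.beta(2 + heads, 2 + n - heads)
print("MLE point estimate   :", theta_mle)
print("posterior mean       :", posterior.mean())
print("posterior std        :", posterior.std())
print("95% credible interval:", posterior.interval(0.95))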
74,499 | People say that soft margin SVM uses the hinge loss function: $\max(0,1-y_i(w^\intercal x_i+b))$. However, the actual objective function that soft margin SVM tries to minimize is
$$
\frac{1}{2}\|w\|^2+C\sum_i\max(0,1-y_i(w^\intercal x_i+b))
$$
Some authors call the $\|w\|^2$ term regularizer and the $\max(0,1-y_i(w^\intercal x_i+b))$ term loss function. However, for hard margin SVM, the whole objective function is just
$$
\frac{1}{2}\|w\|^2
$$
Does that mean hard margin SVM only minimize a regularizer without any loss function? That sounds very strange. Well, if $\frac{1}{2}\|w\|^2$ is the loss function in this case, can we call it quadratic loss function? If so, why the loss function of hard margin SVM becomes regularizer in soft margin SVM and make a change from quadratic loss to hinge loss? | The hinge loss term $\sum_i\max(0,1-y_i(\mathbf{w}^\intercal \mathbf{x}_i+b))$ in soft margin SVM penalizes misclassifications . In hard margin SVM there are, by definition, no misclassifications. This indeed means that hard margin SVM tries to minimize $\|\mathbf{w}\|^2$. Due to the formulation of the SVM problem, the margin is $2/\|\mathbf{w}\|$. As such, minimizing the norm of $\mathbf{w}$ is geometrically equivalent to maximizing the margin. Exactly what we want! Regularization is a technique to avoid overfitting by penalizing large coefficients in the solution vector. In hard margin SVM $\|\mathbf{w}\|^2$ is both the loss function and an $L_2$ regularizer. In soft-margin SVM, the hinge loss term also acts like a regularizer but on the slack variables instead of $\mathbf{w}$ and in $L_1$ rather than $L_2$. $L_1$ regularization induces sparsity, which is why standard SVM is sparse in terms of support vectors (in contrast to least-squares SVM). | {
"source": [
"https://stats.stackexchange.com/questions/74499",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/20525/"
]
} |
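A tiny NumPy illustration (made-up data, not from the original answer) of the point above: when every training point satisfies $y_i(w^\intercal x_i+b)\ge 1$, the hinge term is exactly zero and the soft-margin objective reduces to $\frac{1}{2}\|w\|^2$, the hard-margin objective.
import numpy as np
def soft_margin_objective(w, b, X, y, C=1.0):
    margins = y * (X @ w + b)
    hinge = np.maximum(0.0, 1.0 - margins)
    return 0.5 * w @ w + C * hinge.sum(), hinge.sum()
X = np.array([[2.0, 2.0], [3.0, 1.0], [-2.0, -1.0], [-1.0, -3.0]])
y = np.array([1, 1, -1, -1])
w, b = np.array([0.5, 0.5]), 0.0           # separates the points with margin >= 1
obj, hinge_sum = soft_margin_objective(w, b, X, y)
print("hinge term:", hinge_sum)            # 0: no misclassifications or margin violations
print("objective :", obj)                  # equals 0.5 * ||w||^2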
74,542 | I've been reading Elements of Statistical Learning , and I would like to know why the Lasso provides variable selection and ridge regression doesn't. Both methods minimize the residual sum of squares and have a constraint on the possible values of the parameters $\beta$. For the Lasso, the constraint is $||\beta||_1 \le t$, whereas for ridge it is $||\beta||_2 \le t$, for some $t$. I've seen the diamond vs ellipse picture in the book and I have some intuition as for why the Lasso can hit the corners of the constrained region, which implies that one of the coefficients is set to zero. However, my intuition is rather weak, and I'm not convinced. It should be easy to see, but I don't know why this is true. So I guess I'm looking for a mathematical justification, or an intuitive explanation of why the contours of the residual sum of squares are likely to hit the corners of the $||\beta||_1$ constrained region (whereas this situation is unlikely if the constraint is $||\beta||_2$). | Let's consider a very simple model: $y = \beta x + e$ , with an L1 penalty on $\hat{\beta}$ and a least-squares loss function on $\hat{e}$ . We can expand the expression to be minimized as: $\min y^Ty -2 y^Tx\hat{\beta} + \hat{\beta} x^Tx\hat{\beta} + 2\lambda|\hat{\beta}|$ Keep in mind this is a univariate example, with $\beta$ and $x$ being scalars, to show how LASSO can send a coefficient to zero. This can be generalized to the multivariate case. Let us assume the least-squares solution is some $\hat{\beta} > 0$ , which is equivalent to assuming that $y^Tx > 0$ , and see what happens when we add the L1 penalty. With $\hat{\beta}>0$ , $|\hat{\beta}| = \hat{\beta}$ , so the penalty term is equal to $2\lambda\beta$ . The derivative of the objective function w.r.t. $\hat{\beta}$ is: $-2y^Tx +2x^Tx\hat{\beta} + 2\lambda$ which evidently has solution $\hat{\beta} = (y^Tx - \lambda)/(x^Tx)$ . Obviously by increasing $\lambda$ we can drive $\hat{\beta}$ to zero (at $\lambda = y^Tx$ ). However, once $\hat{\beta} = 0$ , increasing $\lambda$ won't drive it negative, because, writing loosely, the instant $\hat{\beta}$ becomes negative, the derivative of the objective function changes to: $-2y^Tx +2x^Tx\hat{\beta} - 2\lambda$ where the flip in the sign of $\lambda$ is due to the absolute value nature of the penalty term; when $\beta$ becomes negative, the penalty term becomes equal to $-2\lambda\beta$ , and taking the derivative w.r.t. $\beta$ results in $-2\lambda$ . This leads to the solution $\hat{\beta} = (y^Tx + \lambda)/(x^Tx)$ , which is obviously inconsistent with $\hat{\beta} < 0$ (given that the least squares solution $> 0$ , which implies $y^Tx > 0$ , and $\lambda > 0$ ). There is an increase in the L1 penalty AND an increase in the squared error term (as we are moving farther from the least squares solution) when moving $\hat{\beta}$ from $0$ to $ < 0$ , so we don't, we just stick at $\hat{\beta}=0$ . It should be intuitively clear the same logic applies, with appropriate sign changes, for a least squares solution with $\hat{\beta} < 0$ . With the least squares penalty $\lambda\hat{\beta}^2$ , however, the derivative becomes: $-2y^Tx +2x^Tx\hat{\beta} + 2\lambda\hat{\beta}$ which evidently has solution $\hat{\beta} = y^Tx/(x^Tx + \lambda)$ . Obviously no increase in $\lambda$ will drive this all the way to zero. So the L2 penalty can't act as a variable selection tool without some mild ad-hockery such as "set the parameter estimate equal to zero if it is less than $\epsilon$ ". 
Obviously things can change when you move to multivariate models, for example, moving one parameter estimate around might force another one to change sign, but the general principle is the same: the L2 penalty function can't get you all the way to zero, because, writing very heuristically, it in effect adds to the "denominator" of the expression for $\hat{\beta}$ , but the L1 penalty function can, because it in effect adds to the "numerator". | {
"source": [
"https://stats.stackexchange.com/questions/74542",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/6497/"
]
} |
74,776 | The "Iris" dataset is probably familiar to most people here - it's one of the canonical test data sets and a go-to example dataset for everything from data visualization to machine learning. For example, everyone in this question ended up using it for a discussion of scatterplots separated by treatment. What makes the Iris data set so useful? Just that it was there first? If someone was trying to create a useful example/testing data set, what lessons could they take away from it? | The Iris dataset is deservedly widely used throughout statistical science, especially for illustrating various problems in statistical graphics, multivariate statistics and machine learning. Containing 150 observations, it is small but not trivial. The task it poses of discriminating between three species of Iris from measurements of their petals and sepals is simple but challenging. The data are real data, but apparently of good quality. In principle and in practice, test datasets could be synthetic and that might be necessary or useful to make a point. Nevertheless, few people object to real data. The data were used by the celebrated British statistician Ronald Fisher in 1936. (Later he was knighted and became Sir Ronald.) At least some teachers like the idea of a dataset with a link to someone so well known within the field. The data were originally published by the statistically-minded botanist Edgar Anderson, but that earlier origin does not diminish the association. Using a few famous datasets is one of the traditions we hand down, such as telling each new generation that Student worked for Guinness or that many famous statisticians fell out with each other. That may sound like inertia, but in comparing methods old and new, and in evaluating any method, it is often considered helpful to try them out on known datasets, thus maintaining some continuity in how we assess methods. Last, but not least, the Iris dataset can be enjoyably coupled with pictures of the flowers concerned, as from e.g. the useful Wikipedia entry on the dataset . Note. Do your bit for biological correctness in citing the plants concerned carefully. Iris setosa , Iris versicolor and Iris virginica are three species (not varieties, as in some statistical accounts); their binominals should be presented in italic, as here; and Iris as genus name and the other names indicating particular species should begin with upper and lower case respectively. (EDIT 4 May 2022 In a generally excellent book to hand on machine learning, the Iris data are described in terms of classes, types, kinds and subspecies, but never once correctly from a biological viewpoint. Naturally that sloppiness makes not a jot of difference to the machine learning exposition.) Stebbins (1978) gave an appreciation of Anderson, a distinguished and
idiosyncratic botanist, and comments on the scientific background to
distinguishing three species of the genus Iris . Kleinman (2002)
surveys Anderson's graphical contributions with statistical flavor. See also Unwin and Kleinman (2021). Kleinman, K. 2002.
How graphical innovations assisted Edgar Anderson's discoveries in
evolutionary biology. Chance 15(3): 17-21. Stebbins, G. L. 1978. Edgar Anderson 1897--1969. Biographical Memoir. Washington, DC: National Academy of Sciences. accessible here Unwin, A. and Kleinman, K. 2021. The iris data set: In search of the source of virginica . Significance 18: 26-29. https://doi.org/10.1111/1740-9713.01589 | {
"source": [
"https://stats.stackexchange.com/questions/74776",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5836/"
]
} |
76,059 | I am learning R and have been experimenting with analysis of variance. I have been running both kruskal.test(depVar ~ indepVar, data=df) and anova(lm(depVar ~ indepVar, data=dF)) Is there a practical difference between these two tests? My understanding is that they both evaluate the null hypothesis that the populations have the same mean. | There are differences in the assumptions and the hypotheses that are tested. The ANOVA (and t-test) is explicitly a test of equality of means of values. The Kruskal-Wallis (and Mann-Whitney) can be seen technically as a comparison of the mean ranks . Hence, in terms of original values, the Kruskal-Wallis is more general than a comparison of means: it tests whether the probability that a random observation from each group is equally likely to be above or below a random observation from another group. The real data quantity that underlies that comparison is neither the differences in means nor the difference in medians, (in the two sample case) it is actually the median of all pairwise differences - the between-sample Hodges-Lehmann difference. However if you choose to make some restrictive assumptions, then Kruskal-Wallis can be seen as a test of equality of population means, as well as quantiles (e.g. medians), and indeed a wide variety of other measures. That is, if you assume that the group-distributions under the null hypothesis are the same, and that under the alternative, the only change is a distributional shift (a so called " location-shift alternative "), then it is also a test of equality of population means (and, simultaneously, of medians, lower quartiles, etc). [If you do make that assumption, you can obtain estimates of and intervals for the relative shifts, just as you can with ANOVA. Well, it is also possible to obtain intervals without that assumption, but they're more difficult to interpret.] If you look at the answer here , especially toward the end, it discusses the comparison between the t-test and the Wilcoxon-Mann-Whitney, which (when doing two-tailed tests at least) are the equivalent* of ANOVA and Kruskal-Wallis applied to a comparison of only two samples; it gives a little more detail, and much of that discussion carries over to the Kruskal-Wallis vs ANOVA. * (aside a particular issue that arises with multigroup comparisons where you can have non-transitive pairwise differences) It's not completely clear what you mean by a practical difference. You use them in generally a generally similar way. When both sets of assumptions apply they usually tend to give fairly similar sorts of results, but they can certainly give fairly different p-values in some situations. Edit: Here's an example of the similarity of inference even at small samples -- here's the joint acceptance region for the location-shifts among three groups (the second and third each compared with the first) sampled from normal distributions (with small sample sizes) for a particular data set, at the 5% level: Numerous interesting features can be discerned -- the slightly larger acceptance region for the KW in this case, with its boundary consisting of vertical, horizontal and diagonal straight line segments (it is not hard to figure out why). The two regions tell us very similar things about the parameters of interest here. | {
"source": [
"https://stats.stackexchange.com/questions/76059",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/34560/"
]
} |
76,226 | Consider the following figure from Faraway's Linear Models with R (2005, p. 59). The first plot seems to indicate that the residuals and the fitted values are uncorrelated, as they should be in a homoscedastic linear model with normally distributed errors. Therefore, the second and third plots, which seem to indicate dependency between the residuals and the fitted values, suggest a different model. But why does the second plot suggest, as Faraway notes, a heteroscedastic linear model, while the third plot suggest a non-linear model? The second plot seems to indicate that the absolute value of the residuals is strongly positively correlated with the fitted values, whereas no such trend is evident in the third plot. So if it were the case that, theoretically speaking, in a heteroscedastic linear model with normally distributed errors $$
\mbox{Cor}\left(\mathbf{e},\hat{\mathbf{y}}\right) = \left[\begin{array}{ccc}1 & \cdots & 1 \\ \vdots & \ddots & \vdots \\ 1 & \cdots & 1\end{array}\right]
$$ (where the expression on the left is the variance-covariance matrix between the residuals and the fitted values) this would explain why the second and third plots agree with Faraway's interpretations. But is this the case? If not, how else can Faraway's interpretations of the second and third plots be justified? Also, why does the third plot necessarily indicate non-linearity? Isn't it possible that it is linear, but that the errors are either not normally distributed, or else that they are normally distributed, but do not center around zero? | Below are those residual plots with the approximate mean and spread of points (limits that include most of the values) at each value of fitted (and hence of $x$) marked in - to a rough approximation indicating the conditional mean (red) and conditional mean $\pm$ (roughly!) twice the conditional standard deviation (purple): The second plot shows the mean residual doesn't change with the fitted values (and so is doesn't change with $x$), but the spread of the residuals (and hence of the $y$'s about the fitted line) is increasing as the fitted values (or $x$) changes. That is, the spread is not constant. Heteroskedasticity. the third plot shows that the residuals are mostly negative when the fitted value is small, positive when the fitted value is in the middle and negative when the fitted value is large. That is, the spread is approximately constant, but the conditional mean is not - the fitted line doesn't describe how $y$ behaves as $x$ changes, since the relationship is curved. Isn't it possible that it is linear, but that the errors are either not normally distributed, or else that they are normally distributed, but do not center around zero? Not really*, in those situations the plots look different to the third plot. (i) If the errors were normal but not centered at zero, but at $\theta$, say, then the intercept would pick up the mean error, and so the estimated intercept would be an estimate of $\beta_0+\theta$ (that would be its expected value, but it is estimated with error). Consequently, your residuals would still have conditional mean zero, and so the plot would look like the first plot above. (ii) If the errors are not normally distributed the pattern of dots might be densest somewhere other than the center line (if the data were skewed), say, but the local mean residual would still be near 0. Here the purple lines still represent a (very) roughly 95% interval, but it's no longer symmetric. (I'm glossing over a couple of issues to avoid obscuring the basic point here.) * It's not necessarily impossible -- if you have an "error" term that doesn't really behave like errors - say where $x$ and $y$ are related to them in just the right way - you might be able to produce patterns something like these. However, we make assumptions about the error term, such as that it's not related to $x$, for example, and has zero mean; we'd have to break at least some of those sorts of assumptions to do it. (In many cases you may have reason to conclude that such effects should be absent or at least relatively small.) | {
"source": [
"https://stats.stackexchange.com/questions/76226",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/25386/"
]
} |
76,815 | I don't even know if this question makes sense, but what is the difference between multiple regression and partial correlation (apart from the obvious differences between correlation and regression, which is not what I am aiming at)? I want to figure out the following: I have two independent variables ($x_1$, $x_2$) and one dependent variable ($y$). Now individually the independent variables are not correlated with the dependent variable. But for a given $x_1$ $y$ decreases when $x_2$ decreases. So do I analyze that by means of multiple regression or partial correlation ? edit to hopefully improve my question: I am trying to understand the difference between multiple regression and partial correlation. So, when $y$ decreases for a given $x_1$ when $x_2$ decreases, is that due to the combined effect of $x_1$ and $x_2$ on $y$ (multiple regression) or is it due to removing the effect of $x_1$ (partial correlation)? | Multiple linear regression coefficient and partial correlation are directly linked and have the same significance (p-value). Partial r is just another way of standardizing the coefficient, along with beta coefficient (standardized regression coefficient) $^1$ . So, if the dependent variable is $y$ and the independents are $x_1$ and $x_2$ then $$\text{Beta:} \quad \beta_{x_1} = \frac{r_{yx_1} - r_{yx_2}r_{x_1x_2} }{1-r_{x_1x_2}^2}$$ $$\text{Partial r:} \quad r_{yx_1.x_2} = \frac{r_{yx_1} - r_{yx_2}r_{x_1x_2} }{\sqrt{ (1-r_{yx_2}^2)(1-r_{x_1x_2}^2) }}$$ You see that the numerators are the same which tell that both formulas measure the same unique effect of $x_1$ . I will try to explain how the two formulas are structurally identical and how they are not. Suppose that you have z-standardized (mean 0, variance 1) all three variables. The numerator then is equal to the covariance between two kinds of residuals : the (a) residuals left in predicting $y$ by $x_2$ [both variables standard] and the (b) residuals left in predicting $x_1$ by $x_2$ [both variables standard]. Moreover, the variance of the residuals (a) is $1-r_{yx_2}^2$ ; the variance of the residuals (b) is $1-r_{x_1x_2}^2$ . The formula for the partial correlation then appears clearly the formula of plain Pearson $r$ , as computed in this instance between residuals (a) and residuals (b): Pearson $r$ , we know, is covariance divided by the denominator that is the geometric mean of two different variances. Standardized coefficient beta is structurally like Pearson $r$ , only that the denominator is the geometric mean of a variance with own self . The variance of residuals (a) was not counted; it was replaced by second counting of the variance of residuals (b). Beta is thus the covariance of the two residuals relative the variance of one of them (specifically, the one pertaining to the predictor of interest, $x_1$ ). While partial correlation, as already noticed, is that same covariance relative their hybrid variance. Both types of coefficient are ways to standardize the effect of $x_1$ in the milieu of other predictors. Some numerical consequences of the difference. If R-square of multiple regression of $y$ by $x_1$ and $x_2$ happens to be 1 then both partial correlations of the predictors with the dependent will be also 1 absolute value (but the betas will generally not be 1). Indeed, as said before, $r_{yx_1.x_2}$ is the correlation between the residuals of y <- x2 and the residuals of x1 <- x2 . 
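As a quick numerical check (a sketch added here, not part of the original answer; the data are simulated and the variable names are arbitrary), the partial correlation computed as the plain correlation of those two residual series agrees with the formula given above:
set.seed(1)
n <- 500
x2 <- rnorm(n)
x1 <- 0.5 * x2 + rnorm(n)
y  <- 1 + 2 * x1 - 1.5 * x2 + rnorm(n)
e_y  <- resid(lm(y ~ x2))   # residuals of y <- x2
e_x1 <- resid(lm(x1 ~ x2))  # residuals of x1 <- x2
cor(e_y, e_x1)              # r_yx1.x2 as a correlation of residuals
r_y1 <- cor(y, x1); r_y2 <- cor(y, x2); r_12 <- cor(x1, x2)
(r_y1 - r_y2 * r_12) / sqrt((1 - r_y2^2) * (1 - r_12^2))  # the same value from the partial-r formula
Both computations print the same number, which is exactly the identity being described.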
If what is not $x_2$ within $y$ is exactly what is not $x_2$ within $x_1$ then there is nothing within $y$ that is neither $x_1$ nor $x_2$ : complete fit. Whatever is the amount of the unexplained (by $x_2$ ) portion left in $y$ (the $1-r_{yx_2}^2$ ), if it is captured relatively highly by the independent portion of $x_1$ (by the $1-r_{x_1x_2}^2$ ), the $r_{yx_1.x_2}$ will be high. $\beta_{x_1}$ , on the other hand, will be high only provided that the being captured unexplained portion of $y$ is itself a substantial portion of $y$ . From the above formulas one obtains (and extending from 2-predictor regression to a regression with arbitrary number of predictors $x_1,x_2,x_3,...$ ) the conversion formula between beta and corresponding partial r: $$r_{yx_1.X} = \beta_{x_1} \sqrt{ \frac {\text{var} (e_{x_1 \leftarrow X})} {\text{var} (e_{y \leftarrow X})}},$$ where $X$ stands for the collection of all predictors except the current ( $x_1$ ); $e_{y \leftarrow X}$ are the residuals from regressing $y$ by $X$ , and $e_{x_1 \leftarrow X}$ are the residuals from regressing $x_1$ by $X$ , the variables in both these regressions enter them standardized . Note: if we need to to compute partial correlations of $y$ with every predictor $x$ we usually won't use this formula requiring to do two additional regressions. Rather, the sweep operations (often used in stepwise and all subsets regression algorithms) will be done or anti-image correlation matrix will be computed. $^1$ $\beta_{x_1} = b_{x_1} \frac {\sigma_{x_1}}{\sigma_y}$ is the relation between the raw $b$ and the standardized $\beta$ coefficients in regression with intercept. Addendum. Geometry of regression $beta$ and partial $r$ . On the picture below, a linear regression with two correlated predictors, $X_1$ and $X_2$ , is shown. The three variables, including the dependent $Y$ , are drawn as vectors (arrows). This way of display is different from usual scatterplot (aka variable space display) and is called subject space display. (You may encounter similar drawings locally here , here , here , here , here , here , here and in some other threads.) The pictures are drawn after all the three variables were centered, and so (1) every vector's length = st. deviation of the respective variable, and (2) angle (its cosine) between every two vectors = correlation between the respective variables. $Y'$ is the regression prediction (orthogonal projection of $Y$ onto "plane X" spanned by the regressors); $e$ is the error term; $\cos \angle{Y Y'}={|Y'|}/|Y|$ is the multiple correlation coefficient. The skew coordinates of $Y'$ on the predictors $X1$ and $X2$ relate their multiple regression coefficients . These lengths from the origin are the scaled $b$ 's or $beta$ 's. For example, the magnitude of the skew coordinate onto $X_1$ equals $\beta_1\sigma_Y= b_1\sigma_{X_1}$ ; so, if $Y$ is standardized ( $|Y|=1$ ), the coordinate = $\beta_1$ . See also . But how to obtain an impression of the corresponding partial correlation $r_{yx_1.x_2}$ ? To partial out $X_2$ from the other two variables one has to project them on the plane which is orthogonal to $X_2$ . Below, on the left, this plane perpendicular to $X_2$ has been drawn. It is shown at the bottom - and not on the level of the origin - simply in order not to jam the pic. Let's inspect what's going on in that space. Put your eye to the bottom (of the left pic) and glance up, $X_2$ vector starting right from your eye. All the vectors are now the projections. 
$X_2$ is a point, since the plane was produced as the one perpendicular to it. We look so that "Plane X" is a horizontal line to us. Therefore, of the four vectors, only (the projection of) $Y$ departs from the line. From this perspective, $r_{yx_1.x_2}$ is $\cos \alpha$ , where $\alpha$ is the angle between the projection vectors of $Y$ and of $X_1$ on the plane orthogonal to $X_2$ . So it is very simple to understand. Note that $r_{yx_1.x_2}=r_{yy'.x_2}$ , as both $Y'$ and $X_1$ belong to "plane X". We can trace the projections on the right picture back onto the left one. Find that $Y$ on the right pic is $Y\perp$ of the left, which is the residuals of regressing $Y$ by $X_2$ . Likewise, $X_1$ on the right pic is $X_1\perp$ of the left, which is the residuals of regressing $X_1$ by $X_2$ . The correlation between these two residual vectors is $r_{yx_1.x_2}$ , as we know. | {
"source": [
"https://stats.stackexchange.com/questions/76815",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/34927/"
]
} |
76,866 | I have studied algorithms for clustering data (unsupervised learning): EM, and k-means.
I keep reading the following : k-means is a variant of EM, with the assumptions that clusters are
spherical. Can somebody explain the above sentence? I do not understand what spherical means, and how k-means and EM are related, since one does probabilistic assignment and the other does it in a deterministic way. Also, in which situations is it better to use k-means clustering, and in which to use EM clustering? | K-means hard-assigns each data point to one particular cluster at convergence. It makes use of the L2 norm when optimizing: it minimizes, over the assignments and the centroid coordinates, the L2 distance between each point and its centroid. EM soft-assigns a point to clusters (so it gives a probability of any point belonging to any centroid). It doesn't depend on the L2 norm, but is based on the expectation, i.e., the probability of the point belonging to a particular cluster. This makes K-means biased towards spherical clusters. | {
"source": [
"https://stats.stackexchange.com/questions/76866",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/34950/"
]
} |
76,875 | I have tried reading up on different sources, but I am still not clear what test would be the appropriate in my case. There are three different questions I am asking about my dataset: The subjects are tested for infections from X at different times. I want to know if the proportions of positive for X after is related to the proportion of positive for X before: After
|no |yes|
Before|No |1157|35 |
|Yes |220 |13 |
results of chi-squared test:
Chi^2 = 4.183 d.f. = 1 p = 0.04082
results of McNemar's test:
Chi^2 = 134.2 d.f. = 1 p = 4.901e-31 From my understanding, as the data are repeated measures, I must use McNemar's test, which tests if the proportion of positive for X has changed. But my questions seems to need the chi-squared test - testing if the proportion of positive for X after is related to the proportion of positive for X before. I am not even sure if I understand the difference between McNemar's test and the chi-squared correctly. What would be the right test if my question were, "Is the proportion of subjects infected with X after different from before?" A similar case, but where instead of before and after, I measure two different infections at one point in time: Y
|no |yes|
X|No |1157|35 |
|Yes |220 |13 | Which test would be right here if the question is "Does higher proportions of one infections relate to higher proportions of Y"? If my question was, "Is infection Y at time t2 related to infection X at time t1?", which test would be appropriate? Y at t2
|no |yes|
X at t1|No |1157|35 |
|Yes |220 |13 | I was using McNemar's test in all these cases, but I have my doubts if that is the right test to answer my questions. I am using R. Could I use a binomial glm instead? Would that be analogous to the chi-squared test? | It is very unfortunate that McNemar's test is so difficult for people to understand. I even notice that at the top of its Wikipedia page it states that the explanation on the page is difficult for people to understand. The typical short explanation for McNemar's test is either that it is: 'a within-subjects chi-squared test', or that it is 'a test of the marginal homogeneity of a contingency table'. I find neither of these to be very helpful. First, it is not clear what is meant by 'within-subjects chi-squared', because you are always measuring your subjects twice (once on each variable) and trying to determine the relationship between those variables. In addition, 'marginal homogeneity' is barely intelligible (I know what this means and I have a hard time moving from the words to the meaning). (Tragically, even this answer may be confusing. If it is, it may help to read my second attempt below.) Let's see if we can work through a process of reasoning about your top example to see if we can understand whether (and if so, why) McNemar's test is appropriate. You have put: This is a contingency table, so it connotes a chi-squared analysis. Moreover, you want to understand the relationship between ${\rm Before}$ and ${\rm After}$, and the chi-squared test checks for a relationship between the variables, so at first glance it seems like the chi-squared test must be the analysis that answers your question. However, it is worth pointing out that we can also present these data like so: When you look at the data this way, you might think you could do a regular old $t$-test. But a $t$-test isn't quite right. There are two issues: First, because each row lists data measured from the same subject, we wouldn't want to do a between-subjects $t$-test, we would want to do a within-subjects $t$-test. Second, since these data are distributed as a binomial , the variance is a function of the mean. This means that there is no additional uncertainty to worry about once the sample mean has been estimated (i.e., you don't have to subsequently estimate the variance), so you don't have to refer to the $t$ distribution, you can use the $z$ distribution. (For more on this, it may help to read my answer here: The $z$-test vs. the $\chi^2$ test .) Thus, we would need a within-subjects $z$-test. That is, we need a within-subjects test of equality of proportions. We have seen that there are two different ways of thinking about and analyzing these data (prompted by two different ways of looking at the data). So we need to decide which way we should use. The chi-squared test assesses whether ${\rm Before}$ and ${\rm After}$ are independent. That is, are people who were sick beforehand more likely to be sick afterwards than people who have never been sick. It is extremely difficult to see how that wouldn't be the case given that these measurements are assessed on the same subjects. If you did get a non-significant result (as you almost do) that would simply be a type II error. Instead of whether ${\rm Before}$ and ${\rm After}$ are independent, you almost certainly want to know if the treatment works (a question chi-squared does not answer). This is very similar to any number of treatment vs. 
control studies where you want to see if the means are equal, except that in this case your measurements are yes/no and they are within-subjects. Consider a more typical $t$-test situation with blood pressure measured before and after some treatment. Those whose bp was above your sample average beforehand will almost certainly tend to be among the higher bps afterwards, but you don't want to know about the consistency of the rankings, you want to know if the treatment led to a change in mean bp. Your situation here is directly analogous. Specifically, you want to run a within-subjects $z$-test of equality of proportions. That is what McNemar's test is. So, having realized that we want to conduct McNemar's test, how does it work? Running a between-subjects $z$-test is easy, but how do we run a within-subjects version? The key to understanding how to do a within-subjects test of proportions is to examine the contingency table, which decomposes the proportions: \begin{array}{rrrrrr}
& &{\rm After} & & & \\
& &{\rm No} &{\rm Yes} & &{\rm total} \\
{\rm Before}&{\rm No} &1157 &35 & &1192 \\
&{\rm Yes} &220 &13 & &233 \\
& & & & & \\
&{\rm total} &1377 &48 & &1425 \\
\end{array}
Obviously the ${\rm Before}$ proportions are the row totals divided by the overall total, and the ${\rm After}$ proportions are the column totals divided by overall total. When we look at the contingency table we can see that those are, for example: $$
\text{Before proportion yes} = \frac{220 + 13}{1425},\quad\quad
\text{After proportion yes} = \frac{35 + 13}{1425}
$$
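Evaluated numerically (a small check added here, not in the original answer):
c(before = (220 + 13) / 1425, after = (35 + 13) / 1425)
# before is about 0.1635, after is about 0.0337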
What is interesting to note here is that $13$ observations were yes both before and after. They end up as part of both proportions, but as a result of being in both calculations they add no distinct information about the change in the proportion of yeses. Moreover they are counted twice, which is invalid. Likewise, the overall total ends up in both calculations and adds no distinct information. By decomposing the proportions we are able to recognize that the only distinct information about the before and after proportions of yeses exists in the $220$ and $35$, so those are the numbers we need to analyze. This was McNemar's insight. In addition, he realized that under the null, this is a binomial test of $220/(220 + 35)$ against a null proportion of $.5$. (There is an equivalent formulation that is distributed as a chi-squared, which is what R outputs.) There is another discussion of McNemar's test, with extensions to contingency tables larger than 2x2, here . Here is an R demo with your data: mat = as.table(rbind(c(1157, 35),
c( 220, 13) ))
colnames(mat) <- rownames(mat) <- c("No", "Yes")
names(dimnames(mat)) = c("Before", "After")
mat
margin.table(mat, 1)
margin.table(mat, 2)
sum(mat)
mcnemar.test(mat, correct=FALSE)
# McNemar's Chi-squared test
#
# data: mat
# McNemar's chi-squared = 134.2157, df = 1, p-value < 2.2e-16
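# (Check added here, not part of the original demo: the same statistic can be
# computed by hand from the two discordant cells, since the concordant cells and
# the overall total drop out, as explained above.)
n12 <- 35; n21 <- 220            # the two off-diagonal (discordant) counts
(n12 - n21)^2 / (n12 + n21)      # = 134.2157, matching mcnemar.test() above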
binom.test(c(220, 35), p=0.5)
# Exact binomial test
#
# data: c(220, 35)
# number of successes = 220, number of trials = 255, p-value < 2.2e-16
# alternative hypothesis: true probability of success is not equal to 0.5
# 95 percent confidence interval:
# 0.8143138 0.9024996
# sample estimates:
# probability of success
# 0.8627451 If we didn't take the within-subjects nature of your data into account, we would have a slightly less powerful test of the equality of proportions: prop.test(rbind(margin.table(mat, 1), margin.table(mat, 2)), correct=FALSE)
# 2-sample test for equality of proportions without continuity
# correction
#
# data: rbind(margin.table(mat, 1), margin.table(mat, 2))
# X-squared = 135.1195, df = 1, p-value < 2.2e-16
# alternative hypothesis: two.sided
# 95 percent confidence interval:
# 0.1084598 0.1511894
# sample estimates:
# prop 1 prop 2
# 0.9663158 0.8364912 That is, X-squared = 135.1195 instead of chi-squared = 134.2157 . In this case, these differ very little, because you have a lot of data and only $13$ cases are overlapping as discussed above. (Another, and more important, problem here is that this counts your data twice, i.e., $N = 2850$ instead of $N = 1425$.) Here are the answers to your concrete questions: (1) The correct analysis is McNemar's test (as discussed extensively above). (2) This version is trickier, and the phrasing "does higher proportions of one infections relate to higher proportions of Y" is ambiguous. There are two possible questions: It is perfectly reasonable to want to know if the patients who get one of the infections tend to get the other, in which case you would use the chi-squared test of independence. This question is asking whether susceptibility to the two different infections is independent (perhaps because they are contracted via different physiological pathways) or not (perhaps they are contracted due to a generally weakened immune system). It is also perfectly reasonable to want to know if the same proportion of patients tend to get both infections, in which case you would use McNemar's test. The question here is about whether the infections are equally virulent. (3) Since this is once again the same infection, of course they will be related. I gather that this version is not before and after a treatment, but just at some later point in time. Thus, you are asking if the background infection rates are changing organically, which is again a perfectly reasonable question. At any rate, the correct analysis is McNemar's test. Edit: It would seem I misinterpreted your third question, perhaps due to a typo. I now interpret it as two different infections at two separate timepoints. Under this interpretation, the chi-squared test would be appropriate. | {
"source": [
"https://stats.stackexchange.com/questions/76875",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/28472/"
]
} |
76,994 | How could I check if my data, e.g. salary, is from a continuous exponential distribution in R? Here is a histogram of my sample (figure not shown). Any help will be greatly appreciated! | I would do it by first estimating the only distribution parameter, the rate , using fitdistr . This won't tell you whether the distribution fits or not, so you must then use a goodness-of-fit test. For this, you can use ks.test : require(vcd)
require(MASS)
# data generation
ex <- rexp(10000, rate = 1.85) # generate some exponential distribution
control <- abs(rnorm(10000)) # generate some other distribution
# estimate the parameters
fit1 <- fitdistr(ex, "exponential")
fit2 <- fitdistr(control, "exponential")
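# (Side note added here, not part of the original answer: for the exponential
# distribution the ML estimate of the rate is just 1/mean, so fitdistr should agree.)
1/mean(ex)       # roughly 1.85 for the simulated exponential sample
1/mean(control)  # rate estimate for the comparison sample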
# goodness of fit test
ks.test(ex, "pexp", fit1 $estimate) # p-value > 0.05 -> distribution not refused
ks.test(control, "pexp", fit2$ estimate) # significant p-value -> distribution refused
# plot a graph
hist(ex, freq = FALSE, breaks = 100, xlim = c(0, quantile(ex, 0.99)))
curve(dexp(x, rate = fit1$estimate), from = 0, col = "red", add = TRUE) From my personal experience (though I have never found it officially anywhere, please confirm or correct me), ks.test will only run if you supply the parameter estimate first. You cannot let it estimate the parameters automatically as e.g. goodfit does it. That's why you need this two step procedure with fitdistr . For more info follow the excellent guide of Ricci: FITTING DISTRIBUTIONS WITH R . | {
"source": [
"https://stats.stackexchange.com/questions/76994",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/35010/"
]
} |
77,018 | Short definition of boosting : Can a set of weak learners create a single strong learner? A weak
learner is defined to be a classifier which is only slightly
correlated with the true classification (it can label examples better
than random guessing). Short definition of Random Forest : Random Forests grows many classification trees. To classify a new
object from an input vector, put the input vector down each of the
trees in the forest. Each tree gives a classification, and we say the
tree "votes" for that class. The forest chooses the classification
having the most votes (over all the trees in the forest). Another short definition of Random Forest : A random forest is a meta estimator that fits a number of decision
tree classifiers on various sub-samples of the dataset and use
averaging to improve the predictive accuracy and control over-fitting. As I understand it, Random Forest is a boosting algorithm which uses trees as its weak classifiers. I know that it also uses other techniques and improves upon them. Somebody corrected me, saying that Random Forest is not a boosting algorithm. Can someone elaborate on this: why is Random Forest not a boosting algorithm? | Random Forest is a bagging algorithm rather than a boosting algorithm.
They are two opposite ways to achieve a low error. We know that error can be decomposed into bias and variance. A model that is too complex has low bias but large variance, while a model that is too simple has low variance but large bias; both lead to a high error, but for two different reasons. As a result, two different ways of solving the problem come to mind (perhaps due to Breiman and others): variance reduction for a complex model, or bias reduction for a simple model, which correspond to random forests and boosting respectively. Random forests reduce the variance of a large number of "complex" models with low bias. The component models are not "weak" models but models that are, if anything, too complex. If you read about the algorithm, the underlying trees are grown "somewhat" as large as "possible". The underlying trees are independent, parallel models, and additional random variable selection is introduced to make them even more independent, which makes the method perform better than ordinary bagging and earns it the name "random". Boosting, in contrast, reduces the bias of a large number of "small" models with low variance. They are "weak" models, as you quoted. The underlying elements form something like a "chain" or "nested" iterative model that works on the bias at each stage. So they are not independent parallel models; rather, each model is built, by reweighting, on top of all the previous small models. That is the so-called "boosting", one model at a time. Breiman's papers and books discuss trees, random forests and boosting quite a lot; they help you to understand the principles behind the algorithms. | {
"source": [
"https://stats.stackexchange.com/questions/77018",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7170/"
]
} |
77,166 | In Libre Office Calc, the rand() function is available, which chooses a random value between 0 and 1 from a uniform distribution. I'm a bit rusty on my probability, so when I saw the following behaviour, I was puzzled: A = 200x1 column of rand()^2 B = 200x1 column of rand()*rand() mean(A) = 1/3 mean(B) = 1/4 Why is mean(A) != 1/4 ? | It may be helpful to think of rectangles. Imagine you have the chance to get land for free. The size of the land will be determined by (a) one realization of the random variable or (b) two realizations of the same random variable. In the first case (a), the area will be a square with the side length being equal to the sampled value. In the second case (b), the two sampled values will represent width and length of a rectangle. Which alternative do you choose? Let $\mathbf{U}$ be a realization of a positive random variable. a) The expected value of one realization $\mathbf{U}$ determines the area of the square which is equal to $\mathbf{U}^2$. On average, the size of the area will be
$$\mathop{\mathbb{E}}[\mathbf{U}^2]$$ b) If there are two independent realizations $\mathbf{U}_1$ and $\mathbf{U}_2$, the area will be $\mathbf{U}_1 \cdot \mathbf{U}_2$. On average, the size equals
$$\mathop{\mathbb{E}}[\mathbf{U}_1 \cdot \mathbf{U}_2] = \mathop{\mathbb{E}^2}[\mathbf{U}]$$
since both realizations are from the same distribution and independent. When we calculate the difference between the size of the areas a) and b), we obtain $$\mathop{\mathbb{E}}[\mathbf{U}^2] - \mathop{\mathbb{E}^2}[\mathbf{U}]$$ The above term is identical to $\mathop{\mathbb{Var}}[\mathbf{U}]$ which is inherently greater or equal to $0$. This holds for the general case. In your example, you sampled from the uniform distribution $\mathcal{U}(0,1)$. Hence, $$\mathop{\mathbb{E}}[\mathbf{U}] = \frac{1}{2}$$
$$\mathop{\mathbb{E}^2}[\mathbf{U}] = \frac{1}{4}$$
$$\mathop{\mathbb{Var}}[\mathbf{U}] = \frac{1}{12}$$ With $\mathop{\mathbb{E}}[\mathbf{U}^2] = \mathop{\mathbb{Var}}[\mathbf{U}] + \mathop{\mathbb{E}^2}[\mathbf{U}]$ we obtain
$$\mathop{\mathbb{E}}[\mathbf{U}^2] = \frac{1}{12} + \frac{1}{4} = \frac{1}{3}$$ These values were derived analytically but they match the ones you obtained with the random number generator. | {
"source": [
"https://stats.stackexchange.com/questions/77166",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/28134/"
]
} |
77,248 | Can somebody explain autocorrelation function in a time series data? Applying acf to the data, what would be the application? | Unlike regular sampling data, time-series data are ordered. Therefore, there is extra information about your sample that you could take advantage of, if there are useful temporal patterns. The autocorrelation function is one of the tools used to find patterns in the data. Specifically, the autocorrelation function tells you the correlation between points separated by various time lags. As an example, here are some possible acf function values for a series with discrete time periods: The notation is ACF(n=number of time periods between points)=correlation between points separated by n time periods. Ill give examples for the first few values of n. ACF(0)=1 (all data are perfectly correlated with themselves), ACF(1)=.9 (the correlation between a point and the next point is 0.9), ACF(2)=.4 (the correlation between a point and a point two time steps ahead is 0.4)...etc. So, the ACF tells you how correlated points are with each other, based on how many time steps they are separated by. That is the gist of autocorrelation, it is how correlated past data points are to future data points, for different values of the time separation. Typically, you'd expect the autocorrelation function to fall towards 0 as points become more separated (i.e. n becomes large in the above notation) because its generally harder to forecast further into the future from a given set of data. This is not a rule, but is typical. Now, to the second part...why do we care? The ACF and its sister function, the partial autocorrelation function (more on this in a bit), are used in the Box-Jenkins/ARIMA modeling approach to determine how past and future data points are related in a time series. The partial autocorrelation function (PACF) can be thought of as the correlation between two points that are separated by some number of periods n, BUT with the effect of the intervening correlations removed. This is important because lets say that in reality, each data point is only directly correlated with the NEXT data point, and none other. However, it will APPEAR as if the current point is correlated with points further into the future, but only due to a "chain reaction" type effect, i.e., T1 is directly correlated with T2 which is directly correlated with T3, so it LOOKs like T1 is directly correlated with T3. The PACF will remove the intervening correlation with T2 so you can better discern patterns. A nice intro to this is here. The NIST Engineering Statistics handbook, online, also has a chapter on this and an example time series analysis using autocorrelation and partial autocorrelation. I won't reproduce it here, but go through it and you should have a much better understanding of autocorrelation. | {
"source": [
"https://stats.stackexchange.com/questions/77248",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/16362/"
]
} |
77,350 | A common good practice in Machine Learning is to do feature normalization or data standardization of the predictor variables, that is, center the data by subtracting the mean and normalize it by dividing by the variance (or standard deviation). For self-containment and to my understanding, we do this to achieve two main things: Avoid extra small model weights for the purpose of numerical stability. Ensure quick convergence of optimization algorithms like e.g. Conjugate Gradient, so that the large magnitude of one predictor dimension w.r.t. the others doesn't lead to slow convergence. We usually split the data into training, validation and testing sets. In the literature we usually see that to do feature normalization they take the mean and variance (or standard deviation) over the whole set of predictor variables. The big flaw I see here is that if you do that, you are in fact introducing future information into the training predictor variables, namely the future information contained in the mean and variance. Therefore, I do feature normalization over the training data and save the mean and variance. Then I apply feature normalization to the predictor variables of the validation and test data sets using the training mean and variances. Are there any fundamental flaws with this? Can anyone recommend a better alternative? | Your approach is entirely correct. Although data transformations are often undervalued as "preprocessing", one cannot emphasize enough that transformations in order to optimize model performance can and should be treated as part of the model building process. Reasoning: A model shall be applied to unseen data, which is in general not available at the time the model is built. The validation process (including data splitting) simulates this. So in order to get a good estimate of the model quality (and generalization power) one needs to restrict the calculation of the normalization parameters (mean and variance) to the training set. I can only guess why this is not always done in the literature. One argument could be that the calculation of the mean and variance is not that sensitive to small data variations (but even this is only true if the basic sample size is large enough and the data is approximately normally distributed without extreme outliers). | {
"source": [
"https://stats.stackexchange.com/questions/77350",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/29655/"
]
} |
77,546 | I am trying to fit a multivariate linear regression model with approximately 60 predictor variables and 30 observations, so I am using the glmnet package for regularized regression because p>n. I have been going through documentation and other questions but I still can't interpret the results, here's a sample code (with 20 predictors and 10 observations to simplify): I create a matrix x with num rows = num observations and num cols = num predictors and a vector y which represents the response variable > x=matrix(rnorm(10*20),10,20)
> y=rnorm(10) I fit a glmnet model leaving alpha as default (= 1 for lasso penalty) > fit1=glmnet(x,y)
> print(fit1) I understand I get different predictions with decreasing values of lambda (i.e. penalty) Call: glmnet(x = x, y = y)
Df %Dev Lambda
[1,] 0 0.00000 0.890700
[2,] 1 0.06159 0.850200
[3,] 1 0.11770 0.811500
[4,] 1 0.16880 0.774600
.
.
.
[96,] 10 0.99740 0.010730
[97,] 10 0.99760 0.010240
[98,] 10 0.99780 0.009775
[99,] 10 0.99800 0.009331
[100,] 10 0.99820 0.008907 Now I predict my Beta values choosing, for example, the smallest lambda value given from glmnet > predict(fit1,type="coef", s = 0.008907)
21 x 1 sparse Matrix of class "dgCMatrix"
1
(Intercept) -0.08872364
V1 0.23734885
V2 -0.35472137
V3 -0.08088463
V4 .
V5 .
V6 .
V7 0.31127123
V8 .
V9 .
V10 .
V11 0.10636867
V12 .
V13 -0.20328200
V14 -0.77717745
V15 .
V16 -0.25924281
V17 .
V18 .
V19 -0.57989929
V20 -0.22522859 If instead I choose lambda with cv <- cv.glmnet(x,y)
model=glmnet(x,y,lambda=cv$lambda.min) All of the variables would be (.). Doubts and questions: I am not sure about how to choose lambda. Should I use the non (.) variables to fit another model? In my case I would like to keep as much variables as possible. How do I know the p-value, i.e. which variables significantly predict the response? I apologize for my poor statistical knowledge! And thank you for any help. | Here's an unintuitive fact - you're not actually supposed to give glmnet a single value of lambda. From the documentation here : Do not supply a single value for lambda (for predictions after CV use predict() instead).
Supply instead a decreasing sequence of lambda values. glmnet relies on its
warms starts for speed, and its often faster to fit a whole path than compute a
single fit. cv.glmnet will help you choose lambda, as you alluded to in your examples. The authors of the glmnet package suggest cv$lambda.1se instead of cv$lambda.min , but in practice I've had success with the latter. After running cv.glmnet, you don't have to rerun glmnet! Every lambda in the grid ( cv$lambda ) has already been run. This technique is called "Warm Start" and you can read more about it here . Paraphrasing from the introduction, the Warm Start technique reduces running time of iterative methods by using the solution of a different optimization problem (e.g., glmnet with a larger lambda) as the starting value for a later optimization problem (e.g., glmnet with a smaller lambda). To extract the desired run from cv.glmnet.fit , try this: small.lambda.index <- which(cv$lambda == cv$lambda.min)
small.lambda.betas <- cv$glmnet.fit$beta[, small.lambda.index] Revision (1/28/2017) No need to hack to the glmnet object like I did above; take @alex23lemm's advice below and pass the s = "lambda.min" , s = "lambda.1se" or some other number (e.g., s = .007 ) to both coef and predict . Note that your coefficients and predictions depend on this value which is set by cross validation. Use a seed for reproducibility! And don't forget that if you don't supply an "s" in coef and predict , you'll be using the default of s = "lambda.1se" . I have warmed up to that default after seeing it work better in a small data situation. s = "lambda.1se" also tends to provide more regularization, so if you're working with alpha > 0, it will also tend towards a more parsimonious model. You can also choose a numerical value of s with the help of plot.glmnet to get to somewhere in between (just don't forget to exponentiate the values from the x axis!). | {
"source": [
"https://stats.stackexchange.com/questions/77546",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/35264/"
]
} |
77,556 | Suppose you know $Y \sim N(\mu_1, \sigma_1^2)$ or $Y \sim N(\mu_2, \sigma_2^2)$. You observe $Y=y$, some realization of the random variable $Y$. What is the probability that $Y \sim N(\mu_1, \sigma_1^2)$? My intuition is to compare $p$-values from each distribution. Let $p_i$ be the $p$-value for $y$ under $N(\mu_i, \sigma_i^2)$. Here I am thinking of the two-sided $p$-value, $p_i = 2\Phi(-|y-\mu_i|/\sigma_i)$ where $\Phi(x)$ is the standard normal distribution function. I would answer my own question as $p_1/(p_1+p_2)$. But I cannot find any reference that would support this (or even treats this problem). | {
"source": [
"https://stats.stackexchange.com/questions/77556",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/35267/"
]
} |
77,791 | I am confused about the Vector Error Correction Model ( VECM ). Technical background: VECM offers a possibility to apply Vector Autoregressive Model ( VAR ) to integrated multivariate time series. In the textbooks they name some problems in applying a VAR to integrated time series, the most important of which is the so called spurious regression (t-statistics are highly significant and R^2 is high although there is no relation between the variables). The process of estimating the VECM consists roughly of the three following steps, the confusing one of which is for me the first one: Specification and estimation of a VAR model for the integrated multivariate time series Calculate likelihood ratio tests to determine the number of cointegration relations After determining the number of cointegrations, estimate the VECM In the first step one estimates a VAR model with appropriate number of lags (using the usual goodness of fit criteria) and then checks if the residuals correspond to the model assumptions, namely the absence of serial correlation and heteroscedasticity and that the residuals are normally distributed. So, one checks if the VAR model appropriately describes the multivariate time series, and one proceeds to further steps only if it does. And now to my question: If the VAR model describes the data well, why do I need the VECM at all? If my goal is to generate forecasts , isn't it enough to estimate a VAR and check the assumptions, and if they are fulfilled, then just use this model? | The foremost advantage of VECM is that it has nice interpretation with long term and short term equations. In theory VECM is just a representation of cointegrated VAR. This representation is courtesy of Granger's representation theorem. So if you have cointegrated VAR it has VECM representation and vice versa. In practice you need to determine the number of cointegrating relationships. When you fix that number you restrict certain coefficients of VAR model. So advantage of VECM over VAR (which you estimate ignoring VECM) is that the resulting VAR from VECM representation has more efficient coefficient estimates. | {
"source": [
"https://stats.stackexchange.com/questions/77791",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/25944/"
]
} |
77,826 | I have time series data of credit card transaction volumes for different companies. For example: week1: \$5000
week2: \$6000
week3: \$6200
week4: \$7000
week5: \$9000
... Is there a simple method in R to determine if the number series has a linear trend upwards, downwards or like a normal distribution (rising first then dropping)? | {
"source": [
"https://stats.stackexchange.com/questions/77826",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/4679/"
]
} |
77,891 | I ran a repeated design whereby I tested 30 males and 30 females across three different tasks. I want to understand how the behaviour of males and females is different and how that depends on the task. I used both the lme4 package ( lmer ) and the nlme package ( lme ) to investigate this; however, I am stuck with trying to check assumptions for either method. The code I run is lm.full <- lmer(behaviour ~ task*sex + (1|ID/task), REML=FALSE, data=dat)
lm.full2 <-lme(behaviour ~ task*sex, random = ~ 1|ID/task, method="ML", data=dat) I checked if the interaction was the best model by comparing it with the simpler model without the interaction and running an anova: lm.base1 <- lmer(behaviour ~ task+sex+(1|ID/task), REML=FALSE, data=dat)
lm.base2 <- lme(behaviour ~ task+sex, random = ~1|ID/task, method="ML", data=dat)
anova(lm.base1, lm.full)
anova(lm.base2, lm.full2) Q1: Is it ok to use these categorical predictors in a linear mixed model? Q2: Do I understand correctly it is fine the outcome variable ("behaviour") does not need to be normally distributed itself (across sex/tasks)? Q3: How can I check homogeneity of variance? For a simple linear model I use plot(LM$fitted.values,rstandard(LM)) . Is using plot(reside(lm.base1)) sufficient? Q4: To check for normality is using the following code ok? hist((resid(lm.base1) - mean(resid(lm.base1))) / sd(resid(lm.base1)), freq = FALSE); curve(dnorm, add = TRUE) | Q1: Yes - just like any regression model. Q2: Just like general linear models, your outcome variable does not need to be normally distributed as a univariate variable. However, LME models assume that the residuals of the model are normally distributed. So a transformation or adding weights to the model would be a way of taking care of this (and checking with diagnostic plots, of course). Q3: plot(myModel.lme) Q4: qqnorm(myModel.lme, ~ranef(., level=2)) . This code will allow you to make QQ plots for each level of the random effects. LME models assume that not only the within-cluster residuals are normally distributed, but that each level of the random effects are as well. Vary the level from 0, 1, to 2 so that you can check the rat, task, and within-subject residuals. EDIT: I should also add that while normality is assumed and that transformation likely helps reduce problems with non-normal errors/random effects, it's not clear that all problems are actually resolved or that bias isn't introduced. If your data requires a transformation, then be cautious about estimation of the random effects. Here's a paper addressing this . | {
"source": [
"https://stats.stackexchange.com/questions/77891",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/20112/"
]
} |
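A hedged sketch of the diagnostics suggested in Q3/Q4 above, written for an nlme::lme fit of the model in the question (the object name fit is an assumption):
library(nlme)
fit <- lme(behaviour ~ task * sex, random = ~ 1 | ID/task, method = "ML", data = dat)
plot(fit)                              # standardized residuals vs fitted values (homogeneity of variance, Q3)
qqnorm(fit, ~ resid(., type = "p"))    # QQ plot of the within-group residuals (normality, Q4)
qqnorm(fit, ~ ranef(., level = 1))     # QQ plot of the ID-level random effects
qqnorm(fit, ~ ranef(., level = 2))     # QQ plot of the task-within-ID random effects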
78,063 | This question was asked by my friend, who is not internet savvy. I have no statistics background, and I have been searching around the internet for this question. The question is: is it possible to replace outliers with the mean value? If it is possible, is there any book reference/journal to back up this statement? | Clearly it's possible, but it's not clear that it could ever be a good idea. Let's spell out several ways in which this is a limited or deficient solution: In effect you are saying that the outlier value is completely untrustworthy, to the extent that your only possible guess is that the value should be the mean. If that's what you think, it is likely to be more honest just to omit the observation in question, as evidently you don't have enough information to make a better guess. With nothing else said, you need a criterion or criteria for identifying outliers in the first place (as implied by @Frank Harrell). Otherwise this is an arbitrary and subjective procedure, even if it is defended as a matter of judgment. With some criteria, it is possible that removing outliers in this way creates yet more outliers as a side-effect. An example could be that outliers are more than so many standard deviations away from the mean. Removing an outlier changes the standard deviation, and new data points may now qualify, and so on. Presumably the mean here means the mean of all the other values, a point made explicit by @David Marx. The idea is ambiguous without this stipulation. Using the mean may seem a safe or conservative procedure, but changing a value to the mean will change almost every other statistic, including measures of level, scale and shape and indicators of their uncertainty, a point emphasized by @whuber. The mean may not even be a feasible value: simple examples are when values are integers, but typically the mean isn't an integer. Even with the idea that using a summary measure is a cautious thing to do, using the mean rather than the median or any other measure needs some justification. Whenever there are other variables, modifying the value of one variable without reference to others may make a data point anomalous in other senses. What to do with outliers is an open and very difficult question. Loosely, different solutions and strategies have varying appeal. As a very broad-brush generalisation, there is a continuum of views on outliers in statistics and machine learning, from extreme pessimists to extreme optimists. Extreme pessimists feel called to serve as if officers of a Statistical Inquisition, whose duty it is to find outliers as obnoxious contaminants in the data and to deal with them severely. This could be the position, say, of people dealing with financial transactions data, most honest or genuine, but some fraudulent or criminal. Extreme optimists know that outliers are likely, and usually genuine -- the Amazon, or Amazon, is real enough, and really big. Indeed, outliers are often interesting and important and instructive. Floods, fires, and financial crises are what they are, and some are very big. Here is a partial list of possibilities. The ordering is arbitrary and not meant to convey any order in terms of applicability, importance or any other criterion. Nor are these approaches mutually exclusive.
1. One (in my view good) definition is that "[o]utliers are sample values that cause surprise in relation to the majority of the sample" (W.N. Venables and B.D. Ripley. 2002. Modern Applied Statistics with S. New York: Springer, p.119). However, surprise is in the mind of the beholder and is dependent on some tacit or explicit model of the data. There may be another model under which the outlier is not surprising at all, so the data really are (say) lognormal or gamma rather than normal. In short, be prepared to (re)consider your model.
2. Go into the laboratory or the field and do the measurement again. Often this is not practicable, but it would seem standard in several sciences.
3. Test whether outliers are genuine. Most of the tests look pretty contrived to me, but you might find one that you can believe fits your situation. Irrational faith that a test is appropriate is always needed to apply a test that is then presented as quintessentially rational.
4. Throw them out as a matter of judgement.
5. Throw them out using some more-or-less automated (usually not "objective") rule.
6. Ignore them, partially or completely. This could be formal (e.g. trimming) or just a matter of leaving them in the dataset, but omitting them from analyses as too hot to handle.
7. Pull them in using some kind of adjustment, e.g. Winsorizing.
8. Downplay them by using some other robust estimation method.
9. Downplay them by working on a transformed scale.
10. Downplay them by using a non-identity link function.
11. Accommodate them by fitting some appropriate fat-, long-, or heavy-tailed distribution, without or with predictors.
12. Accommodate them by using an indicator or dummy variable as an extra predictor in a model.
13. Side-step the issue by using some non-parametric (e.g. rank-based) procedure.
14. Get a handle on the implied uncertainty using a bootstrapping, jackknifing or permutation-based procedure.
15. Edit to replace an outlier with some more likely value, based on deterministic logic. "An 18-year-old grandmother is unlikely, but the person in question was born in 1932, and it's now 2013, so presumably she is really 81."
16. Edit to replace an impossible or implausible outlier using some imputation method that is currently acceptable not-quite-white magic.
17. Analyse with and without, and see how much difference the outlier(s) make(s), statistically, scientifically or practically.
18. Something Bayesian. My prior ignorance of quite what forbids me from giving any details.
(A small sketch of the trimming, Winsorizing and robust-estimation options is given below.) EDIT This second edition benefits from other answers and comments. I've tried to flag my sources of inspiration. | {
"source": [
"https://stats.stackexchange.com/questions/78063",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/35394/"
]
} |
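A small R sketch of the trimming, Winsorizing and robust-location options in the list above, on made-up data (the values and the 10%/90% winsorizing limits are arbitrary assumptions):
x <- c(2.1, 1.9, 2.4, 2.2, 2.0, 2.3, 9.7)   # one suspicious value
mean(x)                                     # pulled up by the outlier
mean(x, trim = 0.1)                         # trimmed mean: extremes ignored
median(x)                                   # robust location estimate
lims <- quantile(x, c(0.1, 0.9))            # simple symmetric winsorizing limits
mean(pmin(pmax(x, lims[1]), lims[2]))       # winsorized mean: extremes pulled in
x_repl <- replace(x, which.max(x), mean(x[-which.max(x)]))   # the questioner's idea
c(mean(x_repl), sd(x_repl), sd(x))          # location settles on the others' mean, but the spread shrinks a lot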
78,354 | What exactly is a contrast matrix (a term pertaining to an analysis with categorical predictors), and how exactly is a contrast matrix specified? I.e. what are the columns, what are the rows, what are the constraints on that matrix, and what does the number in column j and row i mean? I tried to look into the docs and the web, but it seems that everyone uses it yet there's no definition anywhere. I could reverse-engineer the available pre-defined contrasts, but I think the definition should be available without that. > contr.treatment(4)
2 3 4
1 0 0 0
2 1 0 0
3 0 1 0
4 0 0 1
> contr.sum(4)
[,1] [,2] [,3]
1 1 0 0
2 0 1 0
3 0 0 1
4 -1 -1 -1
> contr.helmert(4)
[,1] [,2] [,3]
1 -1 -1 -1
2 1 -1 -1
3 0 2 -1
4 0 0 3
> contr.SAS(4)
1 2 3
1 1 0 0
2 0 1 0
3 0 0 1
4 0 0 0 | In their nice answer, @Gus_est undertook a mathematical explanation of the essence of the contrast coefficient matrix L (notated there as C). $\bf Lb=k$ is the fundamental formula for testing hypotheses in univariate general linear modeling (where $\bf b$ are the parameters and $\bf k$ are estimable functions representing a null hypothesis), and that answer shows some necessary formulas used in modern ANOVA programs. My answer is styled very differently. It is for a data analyst who sees himself rather as an "engineer" than a "mathematician", so the answer will be a (superficial) "practical" or "didactic" account and will focus on just two topics: (1) what do the contrast coefficients mean, and (2) how can they help to perform ANOVA via a linear regression program. ANOVA as regression with dummy variables: introducing contrasts . Let us imagine an ANOVA with dependent variable Y and categorical factor A having 3 levels (groups). Let us glance at the ANOVA from the linear regression point of view, that is - via turning the factor into a set of dummy (aka indicator aka treatment aka one-hot) binary variables. This is our independent set X . (Probably everybody has heard that it is possible to do ANOVA this way - as linear regression with dummy predictors.) Since one of the three groups is redundant, only two dummy variables will enter the linear model. Let's appoint Group3 to be the redundant, or reference, one. The dummy predictors constituting X are an example of contrast variables , i.e. elementary variables representing categories of a factor. X itself is often called the design matrix. We can now input the dataset into a multiple linear regression program, which will center the data and find the regression coefficients (parameters) $\bf b= (X'X)^{-1}X'y=X^+y$, where "+" designates pseudoinverse. An equivalent pass is not to do the centering but rather to add the constant term of the model as the first column of 1s in X , then estimate the coefficients the same way as above: $\bf b= (X'X)^{-1}X'y=X^+y$. So far so good. Let us define matrix C to be the aggregation (summarization) of the independent variables design matrix X . It simply shows us the coding scheme observed there - the contrast coding matrix (= basis matrix): $\bf C= {\it{aggr}} X$. C
Const A1 A2
Gr1 (A=1) 1 1 0
Gr2 (A=2) 1 0 1
Gr3 (A=3,ref) 1 0 0 The columns are the variables (columns) of X - the elementary contrast variables A1 and A2, dummy in this instance - and the rows are all the groups/levels of the factor. That was our coding matrix C for the indicator or dummy contrast coding scheme. Now, $\bf C^+=L$ is called the contrast coefficient matrix , or L-matrix. Since C is square, $\bf L=C^+=C^{-1}$. The contrast matrix, corresponding to our C - that is, for the indicator contrasts of our example - is therefore: L
Gr1 Gr2 Gr3
(A=1) (A=2) (A=3)
Const 0 0 1 => Const = Mean_Gr3
A1 1 0 -1 => Param1 = Mean_Gr1-Mean_Gr3
A2 0 1 -1 => Param2 = Mean_Gr2-Mean_Gr3 L-matrix is the matrix showing contrast coefficients . Note that sum of contrast coefficients in every row (except row Constant) is $0$. Every such row is called a contrast . Rows correspond to the contrast variables and columns correspond to the groups, factor levels. The significance of contrast coefficients is that they help understand what each effect (each parameter b estimated in the regression with our X , coded as it is) represent in the sense of the difference (the group comparison). We immediately see, following the coefficients, that the estimated Constant will equal the Y mean in the reference group; that parameter b1 (i.e. of dummy variable A1) will equal the difference: Y mean in group1 minus Y mean in group3; and parameter b2 is the difference: mean in group2 minus mean in group3. Note : Saying "mean" right above (and further below) we mean estimated (predicted by the model) mean for a group, not the observed mean in a group. An instructive remark : When we do a regression by binary predictor variables, the parameter of such a variable says about the difference in Y between variable=1 and variable=0 groups. However, in the situation when the binary variables are the set of k-1 dummy variables representing a k -level factor, the meaning of the parameter gets narrower : it shows the difference in Y between variable=1 and (not just variable=0 but even) reference_variable=1 groups. Like $\bf X^+$ (after multiplied by $\bf y$) brings us values of b , similarly $\bf(\it{aggr} \bf X)^+$ brings in meanings of b . OK, we've given the definition of contrast coefficient matrix L . Since $\bf L=C^+=C^{-1}$, symmetrically $\bf C=L^+=L^{-1}$, which means that if you were given or have constructed a contrast matrix L based on categorical factor(s) - to test that L in your analysis, then you have clue for how to code correctly your contrast predictor variables X in order to test the L via an ordinary regression software (i.e. the one processing just "continuous" variables the standard OLS way, and not recognizing categorical factors at all). In our present example the coding was - indicator (dummy) type variables. ANOVA as regression: other contrast types . Let us briefly observe other contrast types (= coding schemes, = parameterization styles) for a categorical factor A . Deviation or effect contrasts . C and L matrices and parameter meaning: C
Const A1 A2
Gr1 (A=1) 1 1 0
Gr2 (A=2) 1 0 1
Gr3 (A=3,ref) 1 -1 -1
L
Gr1 Gr2 Gr3
(A=1) (A=2) (A=3)
Const 1/3 1/3 1/3 => Const = 1/3Mean_Gr1+1/3Mean_Gr2+1/3Mean_Gr3 = Mean_GU
A1 2/3 -1/3 -1/3 => Param1 = 2/3Mean_Gr1-1/3(Mean_Gr2+Mean_Gr3) = Mean_Gr1-Mean_GU
A2 -1/3 2/3 -1/3 => Param2 = 2/3Mean_Gr2-1/3(Mean_Gr1+Mean_Gr3) = Mean_Gr2-Mean_GU
Parameter for the reference group3 = -(Param1+Param2) = Mean_Gr3-Mean_GU
Mean_GU is grand unweighted mean = 1/3(Mean_Gr1+Mean_Gr2+Mean_Gr3) By deviation coding, each group of the factor is being compared with the unweighted grand mean, while Constant is that grand mean. This is what you get in regression with contrast predictors X coded in deviation or effect "manner". Simple contrasts . This contrasts/coding scheme is a hybrid of indicator and deviation types, it gives the meaning of Constant as in deviation type and the meaning of the other parameters as in indicator type: C
Const A1 A2
Gr1 (A=1) 1 2/3 -1/3
Gr2 (A=2) 1 -1/3 2/3
Gr3 (A=3,ref) 1 -1/3 -1/3
L
Gr1 Gr2 Gr3
(A=1) (A=2) (A=3)
Const 1/3 1/3 1/3 => Const = as in Deviation
A1 1 0 -1 => Param1 = as in Indicator
A2 0 1 -1 => Param2 = as in Indicator Helmert contrasts . Compares each group (except reference) with the unweighted mean of the subsequent groups, and Constant is the unweighted grand mean. C and L matrces: C
Const A1 A2
Gr1 (A=1) 1 2/3 0
Gr2 (A=2) 1 -1/3 1/2
Gr3 (A=3,ref) 1 -1/3 -1/2
L
Gr1 Gr2 Gr3
(A=1) (A=2) (A=3)
Const 1/3 1/3 1/3 => Const = Mean_GU
A1 1 -1/2 -1/2 => Param1 = Mean_Gr1-1/2(Mean_Gr2+Mean_Gr3)
A2 0 1 -1 => Param2 = Mean_Gr2-Mean_Gr3 Difference or reverse Helmert contrasts . Compares each group (except reference) with the unweighted mean of the previous groups, and Constant is the unweighted grand mean. C
Const A1 A2
Gr1 (A=1) 1 -1/2 -1/3
Gr2 (A=2) 1 1/2 -1/3
Gr3 (A=3,ref) 1 0 2/3
L
Gr1 Gr2 Gr3
(A=1) (A=2) (A=3)
Const 1/3 1/3 1/3 => Const = Mean_GU
A1 -1 1 0 => Param1 = Mean_Gr2-Mean_Gr1
A2 -1/2 -1/2 1 => Param2 = Mean_Gr3-1/2(Mean_Gr2+Mean_Gr1) Repeated contrasts . Compares each group (except reference) with the next group, and Constant is the unweighted grand mean. C
Const A1 A2
Gr1 (A=1) 1 2/3 1/3
Gr2 (A=2) 1 -1/3 1/3
Gr3 (A=3,ref) 1 -1/3 -2/3
L
Gr1 Gr2 Gr3
(A=1) (A=2) (A=3)
Const 1/3 1/3 1/3 => Const = Mean_GU
A1 1 -1 0 => Param1 = Mean_Gr1-Mean_Gr2
A2 0 1 -1 => Param2 = Mean_Gr2-Mean_Gr3 The Question asks: how exactly is contrast matrix specified? Looking at the types of contrasts outlined so far it is possible to grasp how. Each type has its logic how to "fill in" the values in L . The logic reflects what each parameter means - what are the two combinations of groups it is planned to compare. Polynomial contrasts . These are a bit special, nonlinear. The first effect is a linear one, the second is quadratic, next is cubic. I'm leaving here unaccounted the question how their C and L matrices are to be constructed and if they are the inverse of each other. Please consult with profound @Antoni Parellada's explanations of this type of contrast: 1 , 2 . In balanced designs, Helmert, reverse Helmert, and polynomial contrasts are always orthogonal contrasts . Other types considered above are not orthogonal contrasts. Orthogonal (under balancedness) is the contrast where in contrast matrix L sum in each row (except Const) is zero and sum of products of the corresponding elements of each pair of rows is zero. Here is the angle similarity measures (cosine and Pearson correlation) under different contrast types, except polynomial which I didn't test. Let us have single factor A with k levels, and it was then recoded into the set of k-1 contrast variables of a specific type. What are the values in the correlation or cosine matrix between these contrast variables? Balanced (equal size) groups Unbalanced groups
Contrast type cos corr cos corr
INDICATOR 0 -1/(k-1) 0 varied
DEVIATION .5 .5 varied varied
SIMPLE -1/(k-1) -1/(k-1) varied varied
HELMERT, REVHELMERT 0 0 varied varied
REPEATED varied = varied varied varied
"=" means the two matrices are same while elements in matrix vary I'm giving the table for information and leaving it uncommented. It is of some importance for a deeper glance into general linear modeling. User-defined contrasts . This is what we compose to test a custom comparison hypothesis. Normally sum in every but the first row of L should be 0 which means that two groups or two compositions of groups are being compared in that row (i.e. by that parameter). Where are the model parameters after all ? Are they the rows or the columns of L ? Throughout the text above I was saying that parameters correspond to the rows of L , as the rows represent contrast-variables, the predictors. While the columns are levels of a factor, the groups. That may appear to fall in contradiction with such, for example, theoretical block from @Gus_est answer, where clearly the columns correspond to the parameters: $H_0:
\begin{bmatrix}
0 & 1 & -1 & \phantom{-}0 & \phantom{-}0 \\
0 & 0 & \phantom{-}1 & -1 & \phantom{-}0 \\
0 & 0 & \phantom{-}0 & \phantom{-}1 & -1
\end{bmatrix}
\begin{bmatrix}
\beta_0 \\
\beta_1 \\
\beta_2 \\
\beta_3 \\
\beta_4
\end{bmatrix} =
\begin{bmatrix}
0 \\
0 \\
0
\end{bmatrix}$ Actually, there is no contradiction and the answer to the "problem" is: both rows and columns of the contrast coefficient matrix correspond to the parameters! Just recall that contrasts (contrast variables), the rows, were initially created to represent nothing else than the factor levels: they are the levels except the omitted reference one. Compare please these two equivalent spelling of the L-matrix for the simple contrast: L
Gr1 Gr2 Gr3
A=1 A=2 A=3(reference)
Const 1/3 1/3 1/3
A1 1 0 -1
A2 0 1 -1
L
b0 b1 b2 b3(redundant)
Const A=1 A=2 A=3(reference)
b0 Const 1 1/3 1/3 1/3
b1 A1 0 1 0 -1
b2 A2 0 0 1 -1 The first one is what I've shown before, the second is more "theoretical" (for general linear model algebra) layout. Simply, a column corresponding to Constant term was added. Parameter coefficients b label the rows and columns. Parameter b3, as redundant, will be set to zero. You may pseudoinverse the second layout to get the coding matrix C , where inside in the bottom-right part you will find still the correct codes for contrast variables A1 and A2. That will be so for any contrast type described (except for indicator type - where the pseudoinverse of such rectangular layout won't give correct result; this is probably why simple contrast type was invented for convenience: contrast coefficients identical to indicator type, but for row Constant). Contrast type and ANOVA table results . ANOVA table shows effects as combined (aggregated) - for example main effect of factor A , whereas contrasts correspond to elementary effects, of contrast variables - A1, A2, and (omitted, reference) A3. The parameter estimates for the elementary terms depend on the type of the contrast selected, but the combined result - its mean square and significance level - is the same, whatever the type is. Omnibus ANOVA (say, one-way) null hypothesis that all the three means of A are equal may be put out in a number of equivalent statements, and each will correspond to a specific contrast type: $(\mu_1=\mu_2, \mu_2=\mu_3)$ = repeated type; $(\mu_1=\mu_{23}, \mu_2=\mu_3)$ = Helmert type; $(\mu_1=\mu_{123}, \mu_2=\mu_{123})$ = Deviation type; $(\mu_1=\mu_3, \mu_2=\mu_3)$ = indicator or simple types. ANOVA programs implemented via general linear model paradigm can display both ANOVA table (combined effects: main, interactions) and parameter estimates table (elementary effects b ). Some programs may output the latter table correspondent to the contrast type as bid by the user, but most will output always the parameters correspondent to one type - often, indicator type, because ANOVA programs based on general linear model parameterize specifically dummy variables (most convenient to do) and then switch over for contrasts by special "linking" formulae interpreting the fixed dummy input to a (arbitrary) contrast. Whereas in my answer - showing ANOVA as regression - the "link" is realized as early as at the level of the input X , which called to introduce the notion of the appropriarte coding schema for the data. A few examples showing testing of ANOVA contrasts via usual regression . Showing in SPSS the request a contrast type in ANOVA and getting the same result via linear regression. We have some dataset with Y and factors A (3 levels, reference=last) and B (4 levels, reference=last); find the data below later on. Deviation contrasts example under full factorial model (A, B, A*B). Deviation type requested for both A and B (we might choose to demand different type for each factor, for your information). Contrast coefficient matrix L for A and for B: A=1 A=2 A=3
Const .3333 .3333 .3333
dev_a1 .6667 -.3333 -.3333
dev_a2 -.3333 .6667 -.3333
B=1 B=2 B=3 B=4
Const .2500 .2500 .2500 .2500
dev_b1 .7500 -.2500 -.2500 -.2500
dev_b2 -.2500 .7500 -.2500 -.2500
dev_b3 -.2500 -.2500 .7500 -.2500 Request ANOVA program ( GLM in SPSS) to do analysis of variance and to output explicit results for deviation contrasts: Deviation contrast type compared A=1 vs Grand unweighted Mean and A=2 with that same Mean. Red ellipses ink the difference estimates and their p-values. The combined effect over the factor A is inked by red rectangle. For factor B, everyting is analogously inked in blue. Displaying also the ANOVA table. Note there that the combined contrast effects equal the main effects in it. Let us now create physically contrast variables dev_a1, dev_a2, dev_b1, dev_b2, dev_b3 and run regression. Invert the L -matrices to obtain the coding C matrices: dev_a1 dev_a2
A=1 1.0000 .0000
A=2 .0000 1.0000
A=3 -1.0000 -1.0000
dev_b1 dev_b2 dev_b3
B=1 1.0000 .0000 .0000
B=2 .0000 1.0000 .0000
B=3 .0000 .0000 1.0000
B=4 -1.0000 -1.0000 -1.0000 The column of ones (Constant) is omitted: because we'll use regular regression program (which internally centers variables, and is also intolerant to singularity) variable Constant won't be needed. Now create data X : actually no manual recoding of the factors into these values is needed, the one-stroke solution is $\bf X=DC$, where $\bf D$ is the indicator (dummy) variables, all k columns ( k is the number of levels in a factor). Having created the contrast variables, multiply among those from different factors to get variables to represent interactions (our ANOVA model was full factorial): dev_a1b1, dev_a1b2, dev_a1b3, dev_a2b1, dev_a2b2, dev_a2b3. Then run multiple linear regression with all the predictors. As expected, dev_a1 is the same as effect as was the contrast "Level 1 vs Mean"; dev_a2 is the same as was "Level 2 vs Mean", etc etc, - compare the inked parts with the ANOVA contrast analysis above. Note that if we were not using interaction variables dev_a1b1, dev_a1b2... in regression the results will coincide with results of main-effects-only ANOVA contrast analysis. Simple contrasts example under the same full factorial model (A, B, A*B). Contrast coefficient matrix L for A and for B: A=1 A=2 A=3
Const .3333 .3333 .3333
sim_a1 1.0000 .0000 -1.0000
sim_a2 .0000 1.0000 -1.0000
B=1 B=2 B=3 B=4
Const .2500 .2500 .2500 .2500
sim_b1 1.0000 .0000 .0000 -1.0000
sim_b2 .0000 1.0000 .0000 -1.0000
sim_b3 .0000 .0000 1.0000 -1.0000 ANOVA results for simple contrasts: The overall results (ANOVA table) is the same as with deviation contrasts (not displaying now). Create physically contrast variables sim_a1, sim_a2, sim_b1, sim_b2, sim_b3. The coding matrices by inverting of the L-matrices are (w/o Const column): sim_a1 sim_a2
A=1 .6667 -.3333
A=2 -.3333 .6667
A=3 -.3333 -.3333
sim_b1 sim_b2 sim_b3
B=1 .7500 -.2500 -.2500
B=2 -.2500 .7500 -.2500
B=3 -.2500 -.2500 .7500
B=4 -.2500 -.2500 -.2500 Create the data $\bf X=DC$ and add there the interaction contrast variables sim_a1b1, sim_a1b2, ... etc, as the products of the main effects contrast variables. Perform the regression. As before, we see that the results of regression and ANOVA match. A regression parameter of a simple contrast variable is the difference (and significance test of it) between that level of the factor and the reference (the last, in our example) level of it. The two-factor data used in the examples: Y A B
.2260 1 1
.6836 1 1
-1.772 1 1
-.5085 1 1
1.1836 1 2
.5633 1 2
.8709 1 2
.2858 1 2
.4057 1 2
-1.156 1 3
1.5199 1 3
-.1388 1 3
.4865 1 3
-.7653 1 3
.3418 1 4
-1.273 1 4
1.4042 1 4
-.1622 2 1
.3347 2 1
-.4576 2 1
.7585 2 1
.4084 2 2
1.4165 2 2
-.5138 2 2
.9725 2 2
.2373 2 2
-1.562 2 2
1.3985 2 3
.0397 2 3
-.4689 2 3
-1.499 2 3
-.7654 2 3
.1442 2 3
-1.404 2 3
-.2201 2 4
-1.166 2 4
.7282 2 4
.9524 2 4
-1.462 2 4
-.3478 3 1
.5679 3 1
.5608 3 2
1.0338 3 2
-1.161 3 2
-.1037 3 3
2.0470 3 3
2.3613 3 3
.1222 3 4 User-defined contrast example. Let us have a single factor F with 5 levels. I will create and test a set of custom orthogonal contrasts, in ANOVA and in regression. The picture shows the process (one of several possible) of combining/splitting among the 5 groups to obtain 4 orthogonal contrasts, and the L matrix of contrast coefficients resulting from that process is on the right. All the contrasts are orthogonal to each other: $\bf LL'$ is diagonal. (This example schema was copied years ago from D. Howell's book on statistics for psychologists.) Let us submit the matrix to SPSS' ANOVA procedure to test the contrasts. Well, we might submit even a single row (contrast) from the matrix, but we'll submit the whole matrix because - as in previous examples - we'll want to receive the same results via regression, and a regression program will need the complete set of contrast variables (to be aware that they belong together to one factor!). We'll add the constant row to L, just as we did before, although if we don't need to test for the intercept we may safely omit it. UNIANOVA Y BY F
/METHOD=SSTYPE(3)
/INTERCEPT=INCLUDE
/CONTRAST (F)= special
(.2 .2 .2 .2 .2
3 3 -2 -2 -2
1 -1 0 0 0
0 0 2 -1 -1
0 0 0 1 -1)
/DESIGN=F.
Equivalently, we might also use this syntax (with a more flexible /LMATRIX subcommand)
if we omit the Constant row from the matrix.
UNIANOVA Y BY F
/METHOD=SSTYPE(3)
/INTERCEPT=INCLUDE
/LMATRIX= "User contrasts"
F 3 3 -2 -2 -2;
F 1 -1 0 0 0;
F 0 0 2 -1 -1;
F 0 0 0 1 -1
/DESIGN=F. The overall contrasts effect (in the bottom of the pic) is not the same as the expected overall ANOVA effect: but it is simply the artefact of our inserting Constant term into the L matrix. For, SPSS already implies Constant when user-defined contrasts are specified. Remove the constant row from L and we'll get the same contrasts results (matrix K on the pic above) except that L0 contrast won't be displayed. And the overall contrast effect will match the overall ANOVA: OK, now create the contrast variables physically and submit them to regression. $\bf C=L^+$, $\bf X=DC$. C
use_f1 use_f2 use_f3 use_f4
F=1 .1000 .5000 .0000 .0000
F=2 .1000 -.5000 .0000 .0000
F=3 -.0667 .0000 .3333 .0000
F=4 -.0667 .0000 -.1667 .5000
F=5 -.0667 .0000 -.1667 -.5000 Observe the identity of results. The data used in this example: Y F
.2260 1
.6836 1
-1.772 1
-.5085 1
1.1836 1
.5633 1
.8709 1
.2858 1
.4057 1
-1.156 1
1.5199 2
-.1388 2
.4865 2
-.7653 2
.3418 2
-1.273 2
1.4042 2
-.1622 3
.3347 3
-.4576 3
.7585 3
.4084 3
1.4165 3
-.5138 3
.9725 3
.2373 3
-1.562 3
1.3985 3
.0397 4
-.4689 4
-1.499 4
-.7654 4
.1442 4
-1.404 4
-.2201 4
-1.166 4
.7282 4
.9524 5
-1.462 5
-.3478 5
.5679 5
.5608 5
1.0338 5
-1.161 5
-.1037 5
2.0470 5
2.3613 5
.1222 5 Contrasts in other than (M)ANOVA analyses . Wherever nominal predictors appear, the question of contrasts (which contrast type to select for which predictor) arises. Some programs solve it behind the scenes internally, when the overall, omnibus results won't depend on the type selected. If you want a specific type to see more "elementary" results, you have to select one. You select (or, rather, compose) a contrast also when you are testing a custom comparison hypothesis. (M)ANOVA and Loglinear analysis, Mixed and sometimes Generalized linear modeling include options to treat predictors via different types of contrasts. But as I've tried to show, it is possible to create contrasts as contrast variables explicitly and by hand. Then, if you don't have an ANOVA package at hand, you might do it - in many respects with as good luck - with multiple regression (a short R sketch of the $\bf L=C^{-1}$ and $\bf X=DC$ relations follows below). | {
"source": [
"https://stats.stackexchange.com/questions/78354",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/5509/"
]
} |
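A short R sketch of the two central relations used throughout the answer above, $\bf L=C^{-1}$ (or pseudoinverse) and $\bf X=DC$, checked against R's built-in deviation ("sum") coding; the 3-level factor and the simulated data are illustrative assumptions:
set.seed(1)
A <- factor(rep(1:3, each = 4))
y <- rnorm(12) + as.numeric(A)
D <- model.matrix(~ A - 1)                        # indicator (one-hot) matrix, 12 x 3
C <- cbind(Const = 1, contr.sum(3))               # coding (basis) matrix for deviation contrasts
colnames(C)[2:3] <- c("dev1", "dev2")
L <- solve(C)                                     # contrast coefficient matrix, L = C^{-1}
round(L, 3)                                       # rows: Const, dev1, dev2; columns: the 3 groups
X <- D %*% C                                      # constant column plus the two contrast variables
coef(lm(y ~ X - 1))                               # same estimates as letting lm() apply contr.sum itself:
coef(lm(y ~ A, contrasts = list(A = "contr.sum")))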
78,596 | I recently learned about Fisher's method to combine p-values. This is based on the fact that the p-value under the null follows a uniform distribution, and that $$-2\sum_{i=1}^n{\log X_i} \sim \chi^2(2n), \text{ given } X_i \sim \text{Unif}(0,1) \text{ i.i.d.}$$
which I think is genius. But my question is: why go this convoluted way? Why not (what is wrong with) just using the mean of the p-values and the central limit theorem? Or the median? I am trying to understand the genius of R. A. Fisher behind this grand scheme. | You can perfectly well use the mean $p$-value. Fisher's method sets a threshold $s_\alpha$ on $-2 \sum_{i=1}^n \log p_i$, such that if the null hypothesis $H_0$ : all $p$-values are $\sim U(0,1)$ holds, then $-2 \sum_i \log p_i$ exceeds $s_\alpha$ with probability $\alpha$. $H_0$ is rejected when this happens. Usually one takes $\alpha = 0.05$, and $s_\alpha$ is given by a quantile of $\chi^2(2n)$. Equivalently, one can work on the product $\prod_i p_i$, which is lower than $e^{-s_\alpha/2}$ with probability $\alpha$.
Here is, for $n=2$, a graph showing the rejection zone (in red); here we use $s_\alpha = 9.49$, and the rejection zone has area = 0.05. Now you can choose to work on ${1\over n} \sum_{i=1}^n p_i$ instead, or equivalently on $\sum_i p_i$. You just need to find a threshold $t_\alpha$ such that $\sum p_i$ is below $t_\alpha$ with probability $\alpha$; exact computation of $t_\alpha$ is tedious, but for $n$ big enough you can rely on the central limit theorem; for $n = 2$, $t_\alpha = (2\alpha)^{1\over 2}$. The following graph shows the rejection zone (area = 0.05 again). As you can imagine, many other shapes for the rejection zone are possible, and have been proposed. It is not a priori clear which is better, i.e. which has greater power. Let's assume that $p_1$, $p_2$ come from a bilateral $z$-test with non-centrality parameter 1: > p1 <- pchisq( rnorm(1e4, 1, 1)**2, df=1, lower.tail=FALSE )
> p2 <- pchisq( rnorm(1e4, 1, 1)**2, df=1, lower.tail=FALSE ) Let's have a look at the scatterplot, with the points for which the null hypothesis is rejected shown in red. The power of Fisher's product method is approximately > sum(p1*p2<exp(-9.49/2))/1e4
[1] 0.2245 The power of the method based on the sum of $p$-values is approximately > sum(p1+p2<sqrt(0.1))/1e4
[1] 0.1963 So Fisher’s method wins – at least in this case. | {
"source": [
"https://stats.stackexchange.com/questions/78596",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/11013/"
]
} |
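A compact, runnable version of the power comparison sketched in the answer above, for $n=2$ and $\alpha = 0.05$ (the seed is an arbitrary assumption):
set.seed(123)
nsim <- 1e4
p1 <- pchisq(rnorm(nsim, 1, 1)^2, df = 1, lower.tail = FALSE)
p2 <- pchisq(rnorm(nsim, 1, 1)^2, df = 1, lower.tail = FALSE)
s_alpha <- qchisq(0.95, df = 4)             # Fisher threshold: -2*sum(log p) ~ chi^2(2n) under H0, here about 9.49
t_alpha <- sqrt(2 * 0.05)                   # sum-of-p threshold for n = 2
mean(-2 * (log(p1) + log(p2)) > s_alpha)    # power of Fisher's product method
mean(p1 + p2 < t_alpha)                     # power of the mean (sum) of p-values method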
78,606 | I know three methods to do parameter estimation: ML, MAP, and the Bayes approach. And for the MAP and Bayes approaches, we need to pick priors for the parameters, right? Say I have this model $p(x|\alpha,\beta)$, in which $\alpha,\beta$ are parameters. In order to do the estimation using MAP or Bayes, I read in the book that we'd better pick a conjugate prior $p(\alpha,\beta)$, which is a joint probability of $\alpha,\beta$, right? I have 2 questions: Do we have other choices for picking the prior other than this conjugate one? Can we pick priors for $\alpha$ and $\beta$ separately, like $p(\alpha)$ and $p(\beta)$, rather than putting them together in a joint one? | As stated in a comment, the prior distribution represents prior beliefs about the distribution of the parameters. When prior beliefs are actually available, you can: convert them into moments (e.g. mean and variance) and fit a common distribution to these moments (e.g. Gaussian if your parameter lies on the real line, Gamma if it lies in $R^+$); use your intuitive understanding of these beliefs to propose a given prior distribution and check that it really fits your purpose and that it is not too sensitive to arbitrary choices (performing a robustness or sensitivity analysis). When no explicit prior beliefs are available, you can: derive (or simply use if already available - a great resource is http://www.stats.org.uk/priors/noninformative/YangBerger1998.pdf ) a Jeffreys prior (e.g. uniform for a location parameter) or a reference prior (especially in the case of multivariate parameters); sometimes such choices are impossible or quite difficult to derive, and in this case you can try to choose among one of the many "generic" weakly informative priors (e.g. the uniform shrinkage distribution for scale parameters of a hierarchical model, or the $g$-prior for Gaussian regression). Having said that, there is no restriction to using a joint or an independent prior ( $p(a,b)$ vs $p(a) \cdot p(b)$ ). As a complement, I would say that in my humble opinion there are three major things to take care of when choosing a prior: take care that your posterior is integrable almost everywhere (or proper), which is always true if you use an integrable prior (see Does the Bayesian posterior need to be a proper distribution? for more details); limit the support of your prior only if you are highly confident about the support bounds (so avoid doing it); and last but not least, make sure (most of the time experimentally) that your choice of prior means what you want to express. In my opinion, this last task is sometimes the most critical. Never forget that when doing inference a prior means nothing by itself; you have to consider the posterior (which is the combination of prior and likelihood). | {
"source": [
"https://stats.stackexchange.com/questions/78606",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/30540/"
]
} |
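A tiny R sketch of the moment-matching idea in the first option above, fitting a Gamma prior to an elicited prior mean and variance (the numbers are made-up assumptions):
prior_mean <- 2.0                         # elicited belief about a positive parameter
prior_var  <- 0.5
rate  <- prior_mean / prior_var           # Gamma(shape, rate): mean = shape/rate, var = shape/rate^2
shape <- prior_mean^2 / prior_var
curve(dgamma(x, shape = shape, rate = rate), from = 0, to = 6,
      ylab = "prior density")             # sanity-check that the implied prior expresses the intended belief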
78,711 | Suppose that I have a variable like X with an unknown distribution. In Mathematica, by using the SmoothKernelDensity function we can obtain an estimated density function. This estimated density function can be used together with the PDF function to calculate the probability density at a value of X, in the form PDF[density, X], assuming that "density" is the result of SmoothKernelDensity. It would be good if there were such a feature in R. This is how it works in Mathematica: http://reference.wolfram.com/mathematica/ref/SmoothKernelDistribution.html As an example (based on Mathematica functions): data = RandomVariate[NormalDistribution[], 100]; #generates 100 values from N(0,1)
density = SmoothKernelDistribution[data]; #estimated density
PDF[density, 2.345] returns 0.0588784 Here you can find more information about PDF: http://reference.wolfram.com/mathematica/ref/PDF.html I know that I can plot its density function using density(X) in R, and by using ecdf(X) I can obtain its empirical cumulative distribution function. Is it possible to do the same thing in R, based on what I described about Mathematica? Any help and ideas are appreciated. | ?density points out that it uses approx to do linear interpolation already; ?approx points out that approxfun generates a suitable function: x <- log(rgamma(150,5))
df <- approxfun(density(x))
plot(density(x))
xnew <- c(0.45,1.84,2.3)
points(xnew,df(xnew),col=2) By using integrate, starting from an appropriate distance below the minimum of the sample (a multiple, say 4 or 5, of the bandwidth used in df would generally do for an appropriate distance), one can obtain a good approximation of the CDF corresponding to df (a short sketch follows below). | {
"source": [
"https://stats.stackexchange.com/questions/78711",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/26276/"
]
} |
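A minimal sketch of the integrate-based CDF mentioned at the end of the answer above, continuing its code (the factor of 4 bandwidths follows the rough rule of thumb given there, not an exact prescription):
dens <- density(x)                                       # x as generated in the answer
df <- approxfun(dens$x, dens$y, yleft = 0, yright = 0)   # density 0 outside the estimated grid
lower <- min(x) - 4 * dens$bw                            # a few bandwidths below the sample minimum
pf <- function(q) integrate(df, lower, q)$value          # approximate CDF at q
pf(2.345)
ecdf(x)(2.345)                                           # rough check against the empirical CDF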
78,828 | The coefficient of an explanatory variable in a multiple regression tells us the relationship of that explanatory variable with the dependent variable. All this, while 'controlling' for the other explanatory variables. How I have viewed it so far: While each coefficient is being calculated, the other variables are not taken into account, so I consider them to be ignored. So am I right when I think that the terms 'controlled' and 'ignored' can be used interchangeably? | Controlling for something and ignoring something are not the same thing. Let's consider a universe in which only 3 variables exist: $Y$, $X_1$, and $X_2$. We want to build a regression model that predicts $Y$, and we are especially interested in its relationship with $X_1$. There are two basic possibilities. We could assess the relationship between $X_1$ and $Y$ while controlling for $X_2$: $$
Y = \beta_0 + \beta_1X_1 + \beta_2X_2
$$
or, we could assess the relationship between $X_1$ and $Y$ while ignoring $X_2$: $$
Y = \beta_0 + \beta_1X_1
$$ Granted, these are very simple models, but they constitute different ways of looking at how the relationship between $X_1$ and $Y$ manifests. Often, the estimated $\hat\beta_1$s might be similar in both models, but they can be quite different. What is most important in determining how different they are is the relationship (or lack thereof) between $X_1$ and $X_2$. Consider this figure: In this scenario, $X_1$ is correlated with $X_2$. Since the plot is two-dimensional, it sort of ignores $X_2$ (perhaps ironically), so I have indicated the values of $X_2$ for each point with distinct symbols and colors (the pseudo-3D plot below provides another way to try to display the structure of the data). If we fit a regression model that ignored $X_2$, we would get the solid black regression line. If we fit a model that controlled for $X_2$, we would get a regression plane, which is again hard to plot, so I have plotted three slices through that plane where $X_2=1$, $X_2=2$, and $X_2=3$. Thus, we have the lines that show the relationship between $X_1$ and $Y$ that hold when we control for $X_2$. Of note, we see that controlling for $X_2$ does not yield a single line, but a set of lines. Another way to think about the distinction between ignoring and controlling for another variable, is to consider the distinction between a marginal distribution and a conditional distribution . Consider this figure: ( This is taken from my answer here: What is the intuition behind conditional Gaussian distributions? ) If you look at the normal curve drawn to the left of the main figure, that is the marginal distribution of $Y$. It is the distribution of $Y$ if we ignore its relationship with $X$. Within the main figure, there are two normal curves representing conditional distributions of $Y$ when $X_1 = 25$ and $X_1 = 45$. The conditional distributions control for the level of $X_1$, whereas the marginal distribution ignores it. | {
"source": [
"https://stats.stackexchange.com/questions/78828",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/31152/"
]
} |
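A small simulation sketch of the two models contrasted in the answer above (the coefficients, the correlation between $X_1$ and $X_2$, and the sample size are arbitrary assumptions):
set.seed(42)
n  <- 1000
x2 <- rnorm(n)
x1 <- 0.7 * x2 + rnorm(n)                # X1 and X2 are correlated
y  <- 1 + 2 * x1 + 3 * x2 + rnorm(n)
coef(lm(y ~ x1))["x1"]                   # ignoring X2: the estimate also absorbs part of X2's effect
coef(lm(y ~ x1 + x2))["x1"]              # controlling for X2: the estimate is close to the true value 2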