source_id | question | response | metadata |
---|---|---|---|
131,281 | What would be the approach to use Dynamic Time Warping (DTW) to perform clustering of time series? I have read about DTW as a way to find similarity between two time series, even when they are shifted in time. Can I use this method as a similarity measure for a clustering algorithm like k-means? | Do not use k-means for time series. DTW is not minimized by the mean; k-means may not converge, and even if it converges it will not yield a very good result. The mean is a least-squares estimator on the coordinates: it minimizes variance, and k-means is designed for minimizing variance, not arbitrary distances such as DTW. Assume you have two time series: two sine waves of the same frequency and a rather long sampling period, but offset by $\pi$. Since DTW does time warping, it can align them so they perfectly match, except for the beginning and end. DTW will assign a rather small distance to these two series.
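A quick way to check this numerically - a sketch only, assuming the dtw package is installed (the exact values depend on the sampling grid and step pattern):
library(dtw)
t <- seq(0, 6 * pi, length.out = 200)
a <- sin(t)
b <- sin(t + pi)            # the same wave, offset by pi
sum(abs(a - b))             # lock-step (no warping) distance: large, the series look far apart
dtw(a, b)$distance          # DTW distance: much smaller, because warping lines the peaks up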
However, if you compute the mean of the two series, it will be a flat 0 - they cancel out. The mean does not do dynamic time warping, and loses all the value that DTW got. On such data, k-means may fail to converge, and the results will be meaningless. K-means really should only be used with variance (= squared Euclidean), or in cases that are equivalent (like cosine on L2-normalized data, where squared Euclidean distance equals $2 - 2\times$ cosine similarity, so minimizing one is equivalent to maximizing the other). Instead, compute a distance matrix using DTW, then run hierarchical clustering such as single-link. In contrast to k-means, the series may even have different lengths. | {
"source": [
"https://stats.stackexchange.com/questions/131281",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/54659/"
]
} |
132,652 | I have a dataset and would like to figure out which distribution fits my data best. I used the fitdistr() function to estimate the necessary parameters to describe the assumed distribution (i.e. Weibull, Cauchy, Normal). Using those parameters I can conduct a Kolmogorov-Smirnov test to estimate whether my sample data is from the same distribution as my assumed distribution. If the p-value is > 0.05 I can assume that the sample data is drawn from the same distribution. But the p-value doesn't provide any information about the goodness of fit, does it? So in case the p-value of my sample data is > 0.05 for a normal distribution as well as a Weibull distribution, how can I know which distribution fits my data better? This is basically what I have done: > mydata
[1] 37.50 46.79 48.30 46.04 43.40 39.25 38.49 49.51 40.38 36.98 40.00
[12] 38.49 37.74 47.92 44.53 44.91 44.91 40.00 41.51 47.92 36.98 43.40
[23] 42.26 41.89 38.87 43.02 39.25 40.38 42.64 36.98 44.15 44.91 43.40
[34] 49.81 38.87 40.00 52.45 53.13 47.92 52.45 44.91 29.54 27.13 35.60
[45] 45.34 43.37 54.15 42.77 42.88 44.26 27.14 39.31 24.80 16.62 30.30
[56] 36.39 28.60 28.53 35.84 31.10 34.55 52.65 48.81 43.42 52.49 38.00
[67] 38.65 34.54 37.70 38.11 43.05 29.95 32.48 24.63 35.33 41.34
# estimate shape and scale to perform KS-test for weibull distribution
> fitdistr(mydata, "weibull")
shape scale
6.4632971 43.2474500
( 0.5800149) ( 0.8073102)
# KS-test for weibull distribution
> ks.test(mydata, "pweibull", scale=43.2474500, shape=6.4632971)
One-sample Kolmogorov-Smirnov test
data: mydata
D = 0.0686, p-value = 0.8669
alternative hypothesis: two-sided
# KS-test for normal distribution
> ks.test(mydata, "pnorm", mean=mean(mydata), sd=sd(mydata))
One-sample Kolmogorov-Smirnov test
data: mydata
D = 0.0912, p-value = 0.5522
alternative hypothesis: two-sided The p-values are 0.8669 for the Weibull distribution, and 0.5522 for the normal distribution. Thus I can assume that my data follows a Weibull as well as a normal distribution. But which distribution function describes my data better? Referring to elevendollar I found the following code, but don't know how to interpret the results: fits <- list(no = fitdistr(mydata, "normal"),
we = fitdistr(mydata, "weibull"))
sapply(fits, function(i) i$loglik)
no we
-259.6540 -257.9268 | First, here are some quick comments: The $p$ -values of a Kolmogorov-Smirnov-Test (KS-Test) with estimated parameters can be quite wrong because the p -value does not take the uncertainty of the estimation into account. So unfortunately, you can't just fit a distribution and then use the estimated parameters in a Kolmogorov-Smirnov-Test to test your sample. There is a normality test called Lilliefors test which is a modified version of the KS-Test that allows for estimated parameters. Your sample will never follow a specific distribution exactly. So even if your $p$ -values from the KS-Test would be valid and $>0.05$ , it would just mean that you can't rule out that your data follow this specific distribution. Another formulation would be that your sample is compatible with a certain distribution. But the answer to the question "Does my data follow the distribution xy exactly?" is always no. The goal here cannot be to determine with certainty what distribution your sample follows. The goal is what @whuber (in the comments) calls parsimonious approximate descriptions of the data. Having a specific parametric distribution can be useful as a model of the data (such as the model "earth is a sphere" can be useful). But let's do some exploration. I will use the excellent fitdistrplus package which offers some nice functions for distribution fitting. We will use the function descdist to gain some ideas about possible candidate distributions. library(fitdistrplus)
library(logspline)
x <- c(37.50,46.79,48.30,46.04,43.40,39.25,38.49,49.51,40.38,36.98,40.00,
38.49,37.74,47.92,44.53,44.91,44.91,40.00,41.51,47.92,36.98,43.40,
42.26,41.89,38.87,43.02,39.25,40.38,42.64,36.98,44.15,44.91,43.40,
49.81,38.87,40.00,52.45,53.13,47.92,52.45,44.91,29.54,27.13,35.60,
45.34,43.37,54.15,42.77,42.88,44.26,27.14,39.31,24.80,16.62,30.30,
36.39,28.60,28.53,35.84,31.10,34.55,52.65,48.81,43.42,52.49,38.00,
38.65,34.54,37.70,38.11,43.05,29.95,32.48,24.63,35.33,41.34) Now let's use descdist : descdist(x, discrete = FALSE) The kurtosis and squared skewness of your sample are plotted as a blue point named "Observation". It seems that possible distributions include the Weibull, Lognormal and possibly the Gamma distribution. Let's fit a Weibull distribution and a normal distribution: fit.weibull <- fitdist(x, "weibull")
fit.norm <- fitdist(x, "norm") Now inspect the fit for the normal: plot(fit.norm) And for the Weibull fit: plot(fit.weibull) Both look good but judged by the QQ-Plot, the Weibull maybe looks a bit better, especially in the tails. Correspondingly, the AIC of the Weibull fit is lower compared with the normal fit: fit.weibull$aic
[1] 519.8537
fit.norm$aic
[1] 523.3079 Kolmogorov-Smirnov test simulation I will use @Aksakal's procedure explained here to simulate the KS-statistic under the null. n.sims <- 5e4
stats <- replicate(n.sims, {
r <- rweibull(n = length(x)
, shape = fit.weibull$estimate["shape"]
, scale = fit.weibull$estimate["scale"]
)
estfit.weibull <- fitdist(r, "weibull") # added to account for the estimated parameters
as.numeric(ks.test(r
, "pweibull"
, shape = estfit.weibull$estimate["shape"]
, scale = estfit.weibull$estimate["scale"])$statistic
)
}) The ECDF of the simulated KS-statistics looks as follows: plot(ecdf(stats), las = 1, main = "KS-test statistic simulation (CDF)", col = "darkorange", lwd = 1.7)
grid() Finally, our $p$ -value using the simulated null distribution of the KS-statistics is: fit <- logspline(stats)
1 - plogspline(ks.test(x
, "pweibull"
, shape = fit.weibull$estimate["shape"]
, scale = fit.weibull$estimate["scale"])$statistic
, fit
)
[1] 0.4889511 This confirms our graphical conclusion that the sample is compatible with a Weibull distribution. As explained here , we can use bootstrapping to add pointwise confidence intervals to the estimated Weibull PDF or CDF: xs <- seq(10, 65, len=500)
true.weibull <- rweibull(1e6, shape = fit.weibull$estimate["shape"]
, scale = fit.weibull$estimate["scale"])
boot.pdf <- sapply(1:1000, function(i) {
xi <- sample(x, size=length(x), replace=TRUE)
MLE.est <- suppressWarnings(fitdist(xi, distr="weibull"))
dweibull(xs, shape = MLE.est$estimate["shape"], scale = MLE.est$estimate["scale"])
}
)
boot.cdf <- sapply(1:1000, function(i) {
xi <- sample(x, size=length(x), replace=TRUE)
MLE.est <- suppressWarnings(fitdist(xi, distr="weibull"))
pweibull(xs, shape = MLE.est$estimate["shape"], scale = MLE.est$estimate["scale"])
}
)
#-----------------------------------------------------------------------------
# Plot PDF
#-----------------------------------------------------------------------------
par(bg="white", las=1, cex=1.2)
plot(xs, boot.pdf[, 1], type="l", col=rgb(.6, .6, .6, .1), ylim=range(boot.pdf),
xlab="x", ylab="Probability density")
for(i in 2:ncol(boot.pdf)) lines(xs, boot.pdf[, i], col=rgb(.6, .6, .6, .1))
# Add pointwise confidence bands
quants <- apply(boot.pdf, 1, quantile, c(0.025, 0.5, 0.975))
min.point <- apply(boot.pdf, 1, min, na.rm=TRUE)
max.point <- apply(boot.pdf, 1, max, na.rm=TRUE)
lines(xs, quants[1, ], col="red", lwd=1.5, lty=2)
lines(xs, quants[3, ], col="red", lwd=1.5, lty=2)
lines(xs, quants[2, ], col="darkred", lwd=2)
#-----------------------------------------------------------------------------
# Plot CDF
#-----------------------------------------------------------------------------
par(bg="white", las=1, cex=1.2)
plot(xs, boot.cdf[, 1], type="l", col=rgb(.6, .6, .6, .1), ylim=range(boot.cdf),
xlab="x", ylab="F(x)")
for(i in 2:ncol(boot.cdf)) lines(xs, boot.cdf[, i], col=rgb(.6, .6, .6, .1))
# Add pointwise confidence bands
quants <- apply(boot.cdf, 1, quantile, c(0.025, 0.5, 0.975))
min.point <- apply(boot.cdf, 1, min, na.rm=TRUE)
max.point <- apply(boot.cdf, 1, max, na.rm=TRUE)
lines(xs, quants[1, ], col="red", lwd=1.5, lty=2)
lines(xs, quants[3, ], col="red", lwd=1.5, lty=2)
lines(xs, quants[2, ], col="darkred", lwd=2)
#lines(xs, min.point, col="purple")
#lines(xs, max.point, col="purple") Automatic distribution fitting with GAMLSS The gamlss package for R offers the ability to try many different distributions and select the "best" according to the GAIC (the generalized Akaike information criterion). The main function is fitDist. An important option in this function is the type of the distributions that are tried. For example, setting type = "realline" will try all implemented distributions defined on the whole real line whereas type = "realplus" will only try distributions defined on the positive real line. Another important option is the parameter $k$, which is the penalty for the GAIC. In the example below, I set the parameter $k = 2$, which means that the "best" distribution is selected according to the classic AIC. You can set $k$ to anything you like, such as $\log(n)$ for the BIC. library(gamlss)
library(gamlss.dist)
library(gamlss.add)
x <- c(37.50,46.79,48.30,46.04,43.40,39.25,38.49,49.51,40.38,36.98,40.00,
38.49,37.74,47.92,44.53,44.91,44.91,40.00,41.51,47.92,36.98,43.40,
42.26,41.89,38.87,43.02,39.25,40.38,42.64,36.98,44.15,44.91,43.40,
49.81,38.87,40.00,52.45,53.13,47.92,52.45,44.91,29.54,27.13,35.60,
45.34,43.37,54.15,42.77,42.88,44.26,27.14,39.31,24.80,16.62,30.30,
36.39,28.60,28.53,35.84,31.10,34.55,52.65,48.81,43.42,52.49,38.00,
38.65,34.54,37.70,38.11,43.05,29.95,32.48,24.63,35.33,41.34)
fit <- fitDist(x, k = 2, type = "realplus", trace = FALSE, try.gamlss = TRUE)
summary(fit)
*******************************************************************
Family: c("WEI2", "Weibull type 2")
Call: gamlssML(formula = y, family = DIST[i], data = sys.parent())
Fitting method: "nlminb"
Coefficient(s):
Estimate Std. Error t value Pr(>|t|)
eta.mu -24.3468041 2.2141197 -10.9962 < 2.22e-16 ***
eta.sigma 1.8661380 0.0892799 20.9021 < 2.22e-16 *** According to the AIC, the Weibull distribution (more specifically WEI2 , a special parametrization of it) fits the data best. The exact parameterization of the distribution WEI2 is detailed in this document on page 279. Let's inspect the fit by looking at the residuals in a worm plot (basically a de-trended Q-Q-plot): We expect the residuals to be close to the middle horizontal line and 95% of them to lie between the upper and lower dotted curves, which act as 95% pointwise confidence intervals. In this case, the worm plot looks fine to me indicating that the Weibull distribution is an adequate fit. | {
"source": [
"https://stats.stackexchange.com/questions/132652",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/66142/"
]
} |
132,777 | Searched high and low and have not been able to find out what AUC, as related to prediction, stands for or means. | Abbreviations AUC = Area Under the Curve. AUROC = Area Under the Receiver Operating Characteristic curve. AUC is used most of the time to mean AUROC, which is a bad practice since, as Marc Claesen pointed out, AUC is ambiguous (could be any curve) while AUROC is not. Interpreting the AUROC The AUROC has several equivalent interpretations: The probability that a uniformly drawn random positive is ranked before a uniformly drawn random negative. The expected proportion of positives ranked before a uniformly drawn random negative. The expected true positive rate if the ranking is split just before a uniformly drawn random negative. The expected proportion of negatives ranked after a uniformly drawn random positive. The expected false positive rate if the ranking is split just after a uniformly drawn random positive. Going further: How to derive the probabilistic interpretation of the AUROC? Computing the AUROC Assume we have a probabilistic, binary classifier such as logistic regression. Before presenting the ROC curve (= Receiver Operating Characteristic curve), the concept of confusion matrix must be understood. When we make a binary prediction, there can be 4 types of outcomes: We predict 0 while the true class is actually 0: this is called a True Negative, i.e. we correctly predict that the class is negative (0). For example, an antivirus did not detect a harmless file as a virus. We predict 0 while the true class is actually 1: this is called a False Negative, i.e. we incorrectly predict that the class is negative (0). For example, an antivirus failed to detect a virus. We predict 1 while the true class is actually 0: this is called a False Positive, i.e. we incorrectly predict that the class is positive (1). For example, an antivirus considered a harmless file to be a virus. We predict 1 while the true class is actually 1: this is called a True Positive, i.e. we correctly predict that the class is positive (1). For example, an antivirus rightfully detected a virus. To get the confusion matrix, we go over all the predictions made by the model, and count how many times each of those 4 types of outcomes occurs: In this example of a confusion matrix, among the 50 data points that are classified, 45 are correctly classified and 5 are misclassified. Since it is often more convenient to have a single metric rather than several when comparing two different models, we compute two metrics from the confusion matrix, which we will later combine into one: True positive rate (TPR), aka. sensitivity, hit rate, and recall, which is defined as $\frac{TP}{TP+FN}$. Intuitively this metric corresponds to the proportion of positive data points that are correctly considered as positive, with respect to all positive data points. In other words, the higher the TPR, the fewer positive data points we will miss. False positive rate (FPR), aka. fall-out, which is defined as $\frac{FP}{FP+TN}$. Intuitively this metric corresponds to the proportion of negative data points that are mistakenly considered as positive, with respect to all negative data points. In other words, the higher the FPR, the more negative data points will be misclassified.
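As a tiny numeric illustration of these two formulas (the counts below are made up for illustration, not taken from the confusion matrix above):
TP <- 30; FN <- 10; FP <- 5; TN <- 55
TPR <- TP / (TP + FN)   # 0.75: share of actual positives we catch
FPR <- FP / (FP + TN)   # about 0.083: share of actual negatives we wrongly flag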
To combine the FPR and the TPR into one single metric, we first compute the two former metrics with many different thresholds (for example $0.00, 0.01, 0.02, \dots, 1.00$) for the logistic regression, then plot them on a single graph, with the FPR values on the abscissa and the TPR values on the ordinate. The resulting curve is called the ROC curve, and the metric we consider is the AUC of this curve, which we call AUROC. The following figure shows the AUROC graphically: In this figure, the blue area corresponds to the Area Under the curve of the Receiver Operating Characteristic (AUROC). The dashed line on the diagonal represents the ROC curve of a random predictor: it has an AUROC of 0.5. The random predictor is commonly used as a baseline to see whether the model is useful. If you want to get some first-hand experience: Python: http://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html MATLAB: http://www.mathworks.com/help/stats/perfcurve.html | {
"source": [
"https://stats.stackexchange.com/questions/132777",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/62008/"
]
} |
133,155 | I have 2 time-series (both smooth) that I would like to cross-correlate to see how correlated they are. I intend to use the Pearson correlation coefficient. Is this appropriate? My second question is that I can choose to sample the 2 time-series as finely as I like, i.e. I can choose how many data points I will use. Will this affect the correlation coefficient that is output? Do I need to account for this? For illustration purposes option(i)
[1, 4, 7, 10] & [6, 9, 6, 9, 6]
option(ii)
[1,2,3,4,5,6,7,8,9,10] & [6,7,8,9,8,7,6,7,8,9,8,7,6] | Pearson correlation is used to look at correlation between series ... but being time series the correlation is looked at across different lags -- the cross-correlation function. The cross-correlation is impacted by dependence within-series, so in many cases the within-series dependence should be removed first. So to use this correlation, rather than smoothing the series, it's actually more common (because it's meaningful) to look at dependence between residuals - the rough part that's left over after a suitable model is found for the variables. You probably want to begin with some basic resources on time series models before delving into trying to figure out whether a Pearson correlation across (presumably) nonstationary, smoothed series is interpretable. In particular, you'll probably want to look into the phenomenon here. [In time series this is sometimes called spurious correlation, though the Wikipedia article on spurious correlation takes a narrow view on the use of the term in a way that would seem to exclude this use of the term. You'll probably find more on the issues discussed here by searching spurious regression instead.] [Edit -- the Wikipedia landscape keeps changing; the above para. should probably be revised to reflect what's there now.] e.g. see some discussions http://www.math.ku.dk/~sjo/papers/LisbonPaper.pdf (the opening quote of Yule, in a paper presented in 1925 but published the following year, summarizes the problem quite well) Christos Agiakloglou and Apostolos Tsimpanos, Spurious Correlations for Stationary AR(1) Processes http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.611.5055&rep=rep1&type=pdf (this shows that you can even get the problem between stationary series; hence the tendency to prewhiten) The classic reference of Yule (1926) [1] mentioned above. You may also find the discussion here useful, as well as the discussion here -- Using Pearson correlation in a meaningful way between time series is difficult and sometimes surprisingly subtle. I looked up spurious correlation, but I don't care if my A series is the cause of my B series or vice versa. I only want to know if you can learn something about series A by looking at what series B is doing (or vice versa). In other words - do they have a correlation? Take note of my previous comment about the narrow use of the term spurious correlation in the Wikipedia article. The point about spurious correlation is that series can appear correlated, but the correlation itself is not meaningful. Consider two people tossing two distinct coins, counting the number of heads so far minus the number of tails so far as the value of their series. (So if person 1 tosses $\text{HTHH...}$ they have 3-1 = 2 for the value at the 4th time step, and their series goes $1, 0, 1, 2,...$) Obviously there's no connection whatever between the two series. Clearly neither can tell you the first thing about the other! But look at the sort of correlations you get between pairs of coins: If I didn't tell you what those were, and you took any pair of those series by themselves, those would be impressive correlations, would they not? But they're all meaningless. Utterly spurious. None of the three pairs are really any more positively or negatively related to each other than any of the others -- it's just cumulated noise.
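Here is a small base-R sketch of that coin-tossing experiment, so you can reproduce the effect yourself (the exact numbers will vary with the seed):
set.seed(1)
coins <- replicate(3, cumsum(sample(c(-1, 1), 500, replace = TRUE)))  # three independent random walks
round(cor(coins), 2)        # the levels often show large correlations purely by chance
round(cor(diff(coins)), 2)  # after differencing, the cross-correlations are near zero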
The spuriousness isn't just about prediction: the whole notion of considering association between series without taking account of the within-series dependence is misplaced. All you have here is within-series dependence. There's no actual cross-series relation whatever. Once you deal properly with the issue that makes these series auto-dependent - they're all integrated (Bernoulli random walks), so you need to difference them - the "apparent" association disappears (the largest absolute cross-series correlation of the three is 0.048). What that tells you is the truth -- the apparent association is a mere illusion caused by the dependence within-series. Your question asked "how to use Pearson correlation correctly with time series" -- so please understand: if there's within-series dependence and you don't deal with it first, you won't be using it correctly. Further, smoothing won't reduce the problem of serial dependence; quite the opposite -- it makes it even worse! Here are the correlations after smoothing (default loess smooth - of series vs index - performed in R): coin1 coin2
coin2 0.9696378
coin3 -0.8829326 -0.7733559 They all got further from 0. They're all still nothing but meaningless noise , though now it's smoothed, cumulated noise. (By smoothing, we reduce the variability in the series we put into the correlation calculation, so that may be why the correlation goes up.) [1]: Yule, G.U. (1926) "Why do we Sometimes get Nonsense-Correlations between Time-Series?" J.Roy.Stat.Soc. , 89 , 1 , pp. 1-63 | {
"source": [
"https://stats.stackexchange.com/questions/133155",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/30724/"
]
} |
133,369 | Conceptually I grasp the meaning of the phrase "the total area underneath a PDF is 1". It should mean that the chances of the outcome being in the total interval of possibilities is 100%. But I cannot really understand it from a "geometric" point of view. If, for instance, in a PDF the x-axis represents length, would the total area underneath the curve not become larger if x was measured in mm rather than km? I always try to picture how the area underneath the curve would look if the function were flattened to a straight line. Would the height (position on the y-axis) of that line be the same for any PDF,or would it have a value contingent on the interval on the x-axis for which the function is defined? | It might help you to realise that the vertical axis is measured as a probability density . So if the horizontal axis is measured in km, then the vertical axis is measured as a probability density "per km". Suppose we draw a rectangular element on such a grid, which is 5 "km" wide and 0.1 "per km" high (which you might prefer to write as "km$^{-1}$"). The area of this rectangle is 5 km x 0.1 km$^{-1}$ = 0.5. The units cancel out and we are left with just a probability of one half. If you changed the horizontal units to "metres", you'd have to change the vertical units to "per metre". The rectangle would now be 5000 metres wide, and would have a density (height) of 0.0001 per metre. You're still left with a probability of one half. You might get perturbed by how weird these two graphs will look on the page compared to each other (doesn't one have to be much wider and shorter than the other?), but when you're physically drawing the plots you can use whatever scale you like. Look below to see how little weirdness need be involved. You might find it helpful to consider histograms before you move on to probability density curves. In many ways they are analogous. A histogram's vertical axis is frequency density [per $x$ unit] and areas represent frequencies, again because horizontal and vertical units cancel out upon multiplication. The PDF curve is a sort of continuous version of a histogram, with total frequency equal to one. An even closer analogy is a relative frequency histogram - we say such a histogram has been "normalized", so that area elements now represent proportions of your original data set rather than raw frequencies, and the total area of all the bars is one. The heights are now relative frequency densities [per $x$ unit] . If a relative frequency histogram has a bar that runs along $x$ values from 20 km to 25 km (so the width of the bar is 5 km) and has a relative frequency density of 0.1 per km, then that bar contains a 0.5 proportion of the data. This corresponds exactly to the idea that a randomly chosen item from your data set has a 50% probability of lying in that bar. The previous argument about the effect of changes in units still applies: compare the proportions of data lying in the 20 km to 25 km bar to that in the 20,000 metres to 25,000 metres bar for these two plots. You might also confirm arithmetically that the areas of all bars sum to one in both cases. What might I have meant by my claim that the PDF is a "sort of continuous version of a histogram"? Let's take a small strip under a probability density curve, along $x$ values in the interval $[x, x + \delta x]$, so the strip is $\delta x$ wide, and the height of the curve is an approximately constant $f(x)$. 
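(As a quick numeric check of the earlier unit-change argument, here is a sketch using a hypothetical exponential model with rate 0.1 per km; the probability of the event is unchanged, only the density's units change:)
p_km <- integrate(function(x) dexp(x, rate = 0.1), lower = 20, upper = 25)$value
p_m  <- integrate(function(x) dexp(x, rate = 0.1 / 1000), lower = 20000, upper = 25000)$value
c(p_km, p_m)   # both about 0.053: the same probability whether x is in km or in metres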
We can draw a bar of that height, whose area $f(x) \, \delta x$ represents the approximate probability of lying in that strip. How might we find the area under the curve between $x=a$ and $x=b$? We could subdivide that interval into little strips and take the sum of the areas of the bars, $\sum f(x) \, \delta x$, which would correspond to the approximate probability of lying in the interval $[a,b]$. We see that the curve and the bars do not precisely align, so there is an error in our approximation. By making $\delta x$ smaller and smaller for each bar, we fill the interval with more and narrower bars, whose $\sum f(x) \, \delta x$ provides a better estimate of the area. To calculate the area precisely, rather than assuming $f(x)$ was constant across each strip, we evaluate the integral $\int_a^b f(x) dx$, and this corresponds to the true probability of lying in the interval $[a,b]$. Integrating over the whole curve gives a total area (i.e. total probability) one, for the same reason that summing up the areas of all the bars of a relative frequency histogram gives a total area (i.e. total proportion) of one. Integration is itself a sort of continuous version of taking a sum. R code for plots require(ggplot2)
require(scales)
require(gridExtra)
# Code for the PDF plots with bars underneath could be easily readapted
# Relative frequency histograms
x.df <- data.frame(km=c(rep(12.5, 1), rep(17.5, 2), rep(22.5, 5), rep(27.5, 2)))
x.df$metres <- x.df$km * 1000
km.plot <- ggplot(x.df, aes(x=km, y=..density..)) +
stat_bin(origin=10, binwidth=5, fill="steelblue", colour="black") +
xlab("Distance in km") + ylab("Relative frequency density per km") +
scale_y_continuous(minor_breaks = seq(0, 0.1, by=0.005))
metres.plot <- ggplot(x.df, aes(x=metres, y=..density..)) +
stat_bin(origin=10000, binwidth=5000, fill="steelblue", colour="black") +
xlab("Distance in metres") + ylab("Relative frequency density per metre") +
scale_x_continuous(labels = comma) +
scale_y_continuous(minor_breaks = seq(0, 0.0001, by=0.000005), labels=comma)
grid.arrange(km.plot, metres.plot, ncol=2)
x11()
# Probability density functions
x.df <- data.frame(x=seq(0, 1, by=0.001))
cutoffs <- seq(0.2, 0.5, by=0.1) # for bars
barHeights <- c(0, dbeta(cutoffs[1:(length(cutoffs)-1)], 2, 2), 0) # uses left of bar
x.df$pdf <- dbeta(x.df$x, 2, 2)
x.df$bar <- findInterval(x.df$x, cutoffs) + 1 # start at 1, first plotted bar is 2
x.df$barHeight <- barHeights[x.df$bar]
x.df$lastBar <- ifelse(x.df$bar == max(x.df$bar)-1, 1, 0) # last plotted bar only
x.df$lastBarHeight <- ifelse(x.df$lastBar == 1, x.df$barHeight, 0)
x.df$integral <- ifelse(x.df$bar %in% 2:(max(x.df$bar)-1), 1, 0) # all plotted bars
x.df$integralHeight <- ifelse(x.df$integral == 1, x.df$pdf, 0)
cutoffsNarrow <- seq(0.2, 0.5, by=0.025) # for the narrow bars
barHeightsNarrow <- c(0, dbeta(cutoffsNarrow[1:(length(cutoffsNarrow)-1)], 2, 2), 0) # uses left of bar
x.df$barNarrow <- findInterval(x.df$x, cutoffsNarrow) + 1 # start at 1, first plotted bar is 2
x.df$barHeightNarrow <- barHeightsNarrow[x.df$barNarrow]
pdf.plot <- ggplot(x.df, aes(x=x, y=pdf)) +
geom_area(fill="lightsteelblue", colour="black", size=.8) +
ylab("probability density") +
theme(panel.grid = element_blank(),
axis.text.x = element_text(colour="black", size=16))
pdf.lastBar.plot <- pdf.plot +
scale_x_continuous(breaks=tail(cutoffs, 2), labels=expression(x, x+delta*x)) +
geom_area(aes(x=x, y=lastBarHeight, group=lastBar), fill="steelblue", colour="black", size=.8) +
annotate("text", x=0.73, y=0.22, size=6, label=paste("P(paste(x<=X)<=x+delta*x)%~~%f(x)*delta*x"), parse=TRUE)
pdf.bars.plot <- pdf.plot +
scale_x_continuous(breaks=cutoffs[c(1, length(cutoffs))], labels=c("a", "b")) +
geom_area(aes(x=x, y=barHeight, group=bar), fill="steelblue", colour="black", size=.8) +
annotate("text", x=0.73, y=0.22, size=6, label=paste("P(paste(a<=X)<=b)%~~%sum(f(x)*delta*x)"), parse=TRUE)
pdf.barsNarrow.plot <- pdf.plot +
scale_x_continuous(breaks=cutoffsNarrow[c(1, length(cutoffsNarrow))], labels=c("a", "b")) +
geom_area(aes(x=x, y=barHeightNarrow, group=barNarrow), fill="steelblue", colour="black", size=.8) +
annotate("text", x=0.73, y=0.22, size=6, label=paste("P(paste(a<=X)<=b)%~~%sum(f(x)*delta*x)"), parse=TRUE)
pdf.integral.plot <- pdf.plot +
scale_x_continuous(breaks=cutoffs[c(1, length(cutoffs))], labels=c("a", "b")) +
geom_area(aes(x=x, y=integralHeight, group=integral), fill="steelblue", colour="black", size=.8) +
annotate("text", x=0.73, y=0.22, size=6, label=paste("P(paste(a<=X)<=b)==integral(f(x)*dx,a,b)"), parse=TRUE)
grid.arrange(pdf.lastBar.plot, pdf.bars.plot, pdf.barsNarrow.plot, pdf.integral.plot, ncol=2) | {
"source": [
"https://stats.stackexchange.com/questions/133369",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/32504/"
]
} |
133,376 | Say I have a sample and the bootstrap sample from this sample for a stastitic $\chi$ (e.g. the mean). As we all know, this bootstrap sample estimates the sampling distribution of the estimator of the statistic. Now, is the mean of this bootstrap sample a better estimate of the population statistic than the statistic of the original sample ? Under what conditions would that be the case? | Let's generalize, so as to focus on the crux of the matter. I will spell out the tiniest details so as to leave no doubts. The analysis requires only the following: The arithmetic mean of a set of numbers $z_1, \ldots, z_m$ is defined to be $$\frac{1}{m}\left(z_1 + \cdots + z_m\right).$$ Expectation is a linear operator. That is, when $Z_i, i=1,\ldots,m$ are random variables and $\alpha_i$ are numbers, then the expectation of a linear combination is the linear combination of the expectations, $$\mathbb{E}\left(\alpha_1 Z_1 + \cdots + \alpha_m Z_m\right) = \alpha_1 \mathbb{E}(Z_1) + \cdots + \alpha_m\mathbb{E}(Z_m).$$ Let $B$ be a sample $(B_1, \ldots, B_k)$ obtained from a dataset $x = (x_1, \ldots, x_n)$ by taking $k$ elements uniformly from $x$ with replacement. Let $m(B)$ be the arithmetic mean of $B$. This is a random variable. Then $$\mathbb{E}(m(B)) = \mathbb{E}\left(\frac{1}{k}\left(B_1+\cdots+B_k\right)\right) = \frac{1}{k}\left(\mathbb{E}(B_1) + \cdots + \mathbb{E}(B_k)\right)$$ follows by linearity of expectation. Since the elements of $B$ are all obtained in the same fashion, they all have the same expectation, $b$ say: $$\mathbb{E}(B_1) = \cdots = \mathbb{E}(B_k) = b.$$ This simplifies the foregoing to $$\mathbb{E}(m(B)) = \frac{1}{k}\left(b + b + \cdots + b\right) = \frac{1}{k}\left(k b\right) = b.$$ By definition, the expectation is the probability-weighted sum of values. Since each value of $X$ is assumed to have an equal chance of $1/n$ of being selected, $$\mathbb{E}(m(B)) = b = \mathbb{E}(B_1) = \frac{1}{n}x_1 + \cdots + \frac{1}{n}x_n = \frac{1}{n}\left(x_1 + \cdots + x_n\right) = \bar x,$$ the arithmetic mean of the data. To answer the question, if one uses the data mean $\bar x$ to estimate the population mean, then the bootstrap mean (which is the case $k=n$) also equals $\bar x$, and therefore is identical as an estimator of the population mean. For statistics that are not linear functions of the data, the same result does not necessarily hold. However, it would be wrong simply to substitute the bootstrap mean for the statistic's value on the data: that is not how bootstrapping works. Instead, by comparing the bootstrap mean to the data statistic we obtain information about the bias of the statistic. This can be used to adjust the original statistic to remove the bias. As such, the bias-corrected estimate thereby becomes an algebraic combination of the original statistic and the bootstrap mean. For more information, look up "BCa" (bias-corrected and accelerated bootstrap) and "ABC". Wikipedia provides some references. | {
"source": [
"https://stats.stackexchange.com/questions/133376",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/2798/"
]
} |
133,389 | While these two ubiquitous terms are often used synonymously, there sometimes seems to be a distinction. Is there indeed a difference, or are they exactly synonymous? | Errors pertain to the true data generating process (DGP), whereas residuals are what is left over after having estimated your model. In truth, assumptions like normality, homoscedasticity, and independence apply to the errors of the DGP, not your model's residuals. (For example, having fit $p+1$ parameters in your model, only $N-(p+1)$ residuals can be independent.) However, we only have access to the residuals, so that's what we work with. | {
"source": [
"https://stats.stackexchange.com/questions/133389",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/60577/"
]
} |
133,635 | I was performing a Poisson regression in SAS and found that the Pearson chi-squared value divided by the degrees of freedom was around 5, indicating significant overdispersion. So, I fit a negative binomial model with proc genmod and found the Pearson chi-squared value divided by the degrees of freedom is 0.80. Is this now considered to be underdispersed? If so, how does one go about handling this? I have read a lot about overdispersion and believe I know how to handle this but information on how to handle or determine if there is underdispersion is scant. Can anyone assist? | For a Poisson distribution with mean $\mu$ the variance is also $\mu$. Within the framework of generalized linear models this implies that the variance function is
$$V(\mu) = \mu$$
for the Poisson model. This model assumption can be wrong for many different reasons. Overdispersed count data with a variance larger than what the Poisson distribution dictates is, for instance, often encountered. Deviations from the variance assumption can in a regression context take several forms. The simplest one is that the variance function equals
$$V(\mu) = \psi \mu$$
with $\psi > 0$ a dispersion parameter . This is the quasi-Poisson model. It will give the same fitted regression model, but the statistical inference ($p$-values and confidence intervals) is adjusted for over- or underdispersion using an estimated dispersion parameter. The functional form of the variance function can also be wrong. It could be a second degree polynomial
$$V(\mu) = a\mu^2 + b \mu + c,$$
say. Examples include the binomial, the
negative binomial and the gamma model. Choosing any of these models as an alternative to the Poisson model will affect the fitted regression model as well as the subsequent statistical inference. For the negative binomial distribution with shape parameter $\lambda > 0$ the variance function is
$$V(\mu) = \mu\left( 1 + \frac{\mu}{\lambda}\right).$$
We can see from this that if $\lambda \to \infty$ we get the variance function for the Poisson distribution. To determine if the variance function for the Poisson model is appropriate for the data, we can estimate the dispersion parameter as the OP suggests and check if it is approximately 1 (perhaps using a formal test). Such a test does not suggest a specific alternative, but it is most clearly understood within the quasi-Poisson model. To test if the functional form of the variance function is appropriate, we could construct a likelihood ratio test of the Poisson model ($\lambda = \infty$) against the negative binomial model ($\lambda < \infty$). Note that it has a nonstandard distribution under the null hypothesis. Or we could use AIC-based methods in general for comparing non-nested models. Regression-based tests for overdispersion in the Poisson model explores a class of tests for general variance functions. However, I would recommend first of all studying residual plots, e.g. a plot of the Pearson or deviance residuals (or their squared values) against the fitted values. If the functional form of the variance is wrong, you will see this as a funnel shape (or a trend for the squared residuals) in the residual plot. If the functional form is correct, that is, no funnel or trend, there could still be over- or underdispersion, but this can be accounted for by estimating the dispersion parameter. The benefit of the residual plot is that it suggests more clearly than a test what, if anything, is wrong with the variance function. In the OP's concrete case it is not possible to say whether 0.8 indicates underdispersion from the given information. Instead of focusing on the 5 and 0.8 estimates, I suggest first of all investigating the fit of the variance functions of the Poisson model and the negative binomial model. Once the most appropriate functional form of the variance function is determined, a dispersion parameter can be included, if needed, in either model to adjust the statistical inference for any additional over- or underdispersion. How to do that easily in SAS, say, is unfortunately not something I can help with. | {
"source": [
"https://stats.stackexchange.com/questions/133635",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/7962/"
]
} |
133,656 | K-means is a widely used method in cluster analysis. In my understanding, this method does NOT require ANY assumptions, i.e., give me a dataset and a pre-specified number of clusters, k, and I just apply this algorithm which minimizes the sum of squared errors (SSE), the within cluster squared error. So k-means is essentially an optimization problem. I read some material about the drawbacks of k-means. Most of them say that: k-means assumes the variance of the distribution of each attribute (variable) is spherical; all variables have the same variance; the prior probability for all k clusters is the same, i.e., each cluster has roughly equal number of observations; If any one of these 3 assumptions are violated, then k-means will fail. I could not understand the logic behind this statement. I think the k-means method makes essentially no assumptions, it just minimizes the SSE, so I cannot see the link between minimizing the SSE and those 3 "assumptions". | What a great question- it's a chance to show how one would inspect the drawbacks and assumptions of any statistical method. Namely: make up some data and try the algorithm on it! We'll consider two of your assumptions, and we'll see what happens to the k-means algorithm when those assumptions are broken. We'll stick to 2-dimensional data since it's easy to visualize. (Thanks to the curse of dimensionality , adding additional dimensions is likely to make these problems more severe, not less). We'll work with the statistical programming language R: you can find the full code here (and the post in blog form here ). Diversion: Anscombe's Quartet First, an analogy. Imagine someone argued the following: I read some material about the drawbacks of linear regression- that it expects a linear trend, that the residuals are normally distributed, and that there are no outliers. But all linear regression is doing is minimizing the sum of squared errors (SSE) from the predicted line. That's an optimization problem that can be solved no matter what the shape of the curve or the distribution of the residuals is. Thus, linear regression requires no assumptions to work. Well, yes, linear regression works by minimizing the sum of squared residuals. But that by itself is not the goal of a regression: what we're trying to do is draw a line that serves as a reliable, unbiased predictor of y based on x . The Gauss-Markov theorem tells us that minimizing the SSE accomplishes that goal- but that theorem rests on some very specific assumptions. If those assumptions are broken, you can still minimize the SSE, but it might not do anything. Imagine saying "You drive a car by pushing the pedal: driving is essentially a 'pedal-pushing process.' The pedal can be pushed no matter how much gas in the tank. Therefore, even if the tank is empty, you can still push the pedal and drive the car." But talk is cheap. Let's look at the cold, hard, data. Or actually, made-up data. This is in fact my favorite made-up data: Anscombe's Quartet . Created in 1973 by statistician Francis Anscombe, this delightful concoction illustrates the folly of trusting statistical methods blindly. Each of the datasets has the same linear regression slope, intercept, p-value and $R^2$- and yet at a glance we can see that only one of them, I , is appropriate for linear regression. In II it suggests the wrong shape, in III it is skewed by a single outlier- and in IV there is clearly no trend at all! 
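(If you want to verify that claim yourself, Anscombe's quartet ships with base R; a quick sketch:)
# The anscombe data frame has columns x1..x4 and y1..y4; all four regressions
# come out with essentially the same intercept (about 3.0) and slope (about 0.5).
coefs <- sapply(1:4, function(i)
  coef(lm(anscombe[[paste0("y", i)]] ~ anscombe[[paste0("x", i)]])))
round(coefs, 2)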
One could say "Linear regression is still working in those cases, because it's minimizing the sum of squares of the residuals." But what a Pyrrhic victory! Linear regression will always draw a line, but if it's a meaningless line, who cares? So now we see that just because an optimization can be performed doesn't mean we're accomplishing our goal. And we see that making up data, and visualizing it, is a good way to inspect the assumptions of a model. Hang on to that intuition, we're going to need it in a minute. Broken Assumption: Non-Spherical Data You argue that the k-means algorithm will work fine on non-spherical clusters. Non-spherical clusters like... these? Maybe this isn't what you were expecting- but it's a perfectly reasonable way to construct clusters. Looking at this image, we humans immediately recognize two natural groups of points- there's no mistaking them. So let's see how k-means does: assignments are shown in color, imputed centers are shown as X's. Well, that's not right. K-means was trying to fit a square peg in a round hole- trying to find nice centers with neat spheres around them- and it failed. Yes, it's still minimizing the within-cluster sum of squares- but just like in Anscombe's Quartet above, it's a Pyrrhic victory! You might say "That's not a fair example... no clustering method could correctly find clusters that are that weird." Not true! Try single-linkage hierarchical clustering: Nailed it! This is because single-linkage hierarchical clustering makes the right assumptions for this dataset. (There's a whole other class of situations where it fails.) You might say "That's a single, extreme, pathological case." But it's not! For instance, you can make the outer group a semi-circle instead of a circle, and you'll see k-means still does terribly (and hierarchical clustering still does well). I could come up with other problematic situations easily, and that's just in two dimensions. When you're clustering 16-dimensional data, there's all kinds of pathologies that could arise. Lastly, I should note that k-means is still salvageable! If you start by transforming your data into polar coordinates, the clustering now works: That's why understanding the assumptions underlying a method is essential: it doesn't just tell you when a method has drawbacks, it tells you how to fix them. Broken Assumption: Unevenly Sized Clusters What if the clusters have an uneven number of points- does that also break k-means clustering? Well, consider this set of clusters, of sizes 20, 100, and 500. I've generated each from a multivariate Gaussian: This looks like k-means could probably find those clusters, right? Everything seems to be generated into neat and tidy groups. So let's try k-means: Ouch. What happened here is a bit subtler. In its quest to minimize the within-cluster sum of squares, the k-means algorithm gives more "weight" to larger clusters. In practice, that means it's happy to let that small cluster end up far away from any center, while it uses those centers to "split up" a much larger cluster. If you play with these examples a little (R code here!), you'll see that you can construct far more scenarios where k-means gets it embarrassingly wrong. Conclusion: No Free Lunch There's a charming construction in mathematical folklore, formalized by Wolpert and Macready, called the "No Free Lunch Theorem." It's probably my favorite theorem in machine learning philosophy, and I relish any chance to bring it up (did I mention I love this question?)
The basic idea is stated (non-rigorously) as this: "When averaged across all possible situations, every algorithm performs equally well." Sound counterintuitive? Consider that for every case where an algorithm works, I could construct a situation where it fails terribly. Linear regression assumes your data falls along a line- but what if it follows a sinusoidal wave? A t-test assumes each sample comes from a normal distribution: what if you throw in an outlier? Any gradient ascent algorithm can get trapped in local maxima, and any supervised classification can be tricked into overfitting. What does this mean? It means that assumptions are where your power comes from! When Netflix recommends movies to you, it's assuming that if you like one movie, you'll like similar ones (and vice versa). Imagine a world where that wasn't true, and your tastes are perfectly random- scattered haphazardly across genres, actors and directors. Their recommendation algorithm would fail terribly. Would it make sense to say "Well, it's still minimizing some expected squared error, so the algorithm is still working"? You can't make a recommendation algorithm without making some assumptions about users' tastes- just like you can't make a clustering algorithm without making some assumptions about the nature of those clusters. So don't just accept these drawbacks. Know them, so they can inform your choice of algorithms. Understand them, so you can tweak your algorithm and transform your data to solve them. And love them, because if your model could never be wrong, that means it will never be right. | {
"source": [
"https://stats.stackexchange.com/questions/133656",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/66461/"
]
} |
134,229 | I'm working through a statistics textbook while learning R and I've run into a stumbling block on the following example: After looking at ?quantile I attempted to recreate this in R with the following: > nuclear <- c(7, 20, 16, 6, 58, 9, 20, 50, 23, 33, 8, 10, 15, 16, 104)
> quantile(nuclear)
0% 25% 50% 75% 100%
6.0 9.5 16.0 28.0 104.0 Given that the text and R have different results, I'm gathering that R is utilizing the median in the calculation of the first and third quartiles. Question: Should I include the median in calculating the first and third quartiles? More specifically, does the textbook or R have this correct? If the textbook has this correct, is there a way to properly achieve this in R? Thanks in advance. | Your textbook is confused. Very few people or software define quartiles this way. (It tends to make the first quartile too small and the third quartile too large.) The quantile function in R implements nine different ways to compute quantiles! To see which of them, if any, correspond to this method, let's start by implementing it. From the description we can write an algorithm, first mathematically and then in R : Order the data $x_1 \le x_2 \le \cdots \le x_n$. For any set of data the median is its middle value when there are an odd number of values; otherwise it is the average of the two middle values when there are an even number of values. R 's median function calculates this. The index of the middle value is $m = (n+1)/2$. When it is not an integer, $(x_l + x_u)/2$ is the median, where $l$ and $u$ are $m$ rounded down and up. Otherwise when $m$ is an integer, $x_m$ is the median. In that case take $l=m-1$ and $u=m+1$. In either case $l$ is the index of the data value immediately to the left of the median and $u$ is the index of the data value immediately to the right of the median. The "first quartile" is the median of all $x_i$ for which $i \le l$. The "third quartile" is the median of $(x_i)$ for which $i \ge u$. Here is an implementation. It can help you do your exercises in this textbook. quart <- function(x) {
x <- sort(x)
n <- length(x)
m <- (n+1)/2
if (floor(m) != m) {
l <- m-1/2; u <- m+1/2
} else {
l <- m-1; u <- m+1
}
c(Q1=median(x[1:l]), Q3=median(x[u:n]))
} For instance, the output of quart(c(6,7,8,9,10,15,16,16,20,20,23,33,50,58,104)) agrees with the text: Q1 Q3
9 33 Let's compute quartiles for some small datasets using all ten methods: the nine in R and the textbook's: y <- matrix(NA, 2, 10)
rownames(y) <- c("Q1", "Q3")
colnames(y) <- c(1:9, "Quart")
for (n in 3:5) {
j <- 1
for (i in 1:9) {
y[, i] <- quantile(1:n, probs=c(1/4, 3/4), type=i)
}
y[, 10] <- quart(1:n)
cat("\n", n, ":\n")
print(y, digits=2)
} When you run this and check, you will find that the textbook values do not agree with any of the R output for all three sample sizes. (The pattern of disagreements continues in cycles of period three, showing that the problem persists no matter how large the sample may be.) The textbook might have misconstrued John Tukey's method of computing "hinges" (aka "fourths"). The difference is that when splitting the dataset around the median, he includes the median in both halves. That would produce $9.5$ and $28$ for the example dataset. | {
"source": [
"https://stats.stackexchange.com/questions/134229",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/-1/"
]
} |
134,282 | Principal component analysis (PCA) is usually explained via an eigen-decomposition of the covariance matrix. However, it can also be performed via singular value decomposition (SVD) of the data matrix $\mathbf X$. How does it work? What is the connection between these two approaches? What is the relationship between SVD and PCA? Or in other words, how to use SVD of the data matrix to perform dimensionality reduction? | Let the real values data matrix $\mathbf X$ be of $n \times p$ size, where $n$ is the number of samples and $p$ is the number of variables. Let us assume that it is centered , i.e. column means have been subtracted and are now equal to zero. Then the $p \times p$ covariance matrix $\mathbf C$ is given by $\mathbf C = \mathbf X^\top \mathbf X/(n-1)$ . It is a symmetric matrix and so it can be diagonalized: $$\mathbf C = \mathbf V \mathbf L \mathbf V^\top,$$ where $\mathbf V$ is a matrix of eigenvectors (each column is an eigenvector) and $\mathbf L$ is a diagonal matrix with eigenvalues $\lambda_i$ in the decreasing order on the diagonal. The eigenvectors are called principal axes or principal directions of the data. Projections of the data on the principal axes are called principal components , also known as PC scores ; these can be seen as new, transformed, variables. The $j$ -th principal component is given by $j$ -th column of $\mathbf {XV}$ . The coordinates of the $i$ -th data point in the new PC space are given by the $i$ -th row of $\mathbf{XV}$ . If we now perform singular value decomposition of $\mathbf X$ , we obtain a decomposition $$\mathbf X = \mathbf U \mathbf S \mathbf V^\top,$$ where $\mathbf U$ is a unitary matrix (with columns called left singular vectors), $\mathbf S$ is the diagonal matrix of singular values $s_i$ and $\mathbf V$ columns are called right singular vectors. From here one can easily see that $$\mathbf C = \mathbf V \mathbf S \mathbf U^\top \mathbf U \mathbf S \mathbf V^\top /(n-1) = \mathbf V \frac{\mathbf S^2}{n-1}\mathbf V^\top,$$ meaning that right singular vectors $\mathbf V$ are principal directions (eigenvectors) and that singular values are related to the eigenvalues of covariance matrix via $\lambda_i = s_i^2/(n-1)$ . Principal components are given by $\mathbf X \mathbf V = \mathbf U \mathbf S \mathbf V^\top \mathbf V = \mathbf U \mathbf S$ . To summarize: If $\mathbf X = \mathbf U \mathbf S \mathbf V^\top$ , then the columns of $\mathbf V$ are principal directions/axes (eigenvectors). Columns of $\mathbf {US}$ are principal components ("scores"). Singular values are related to the eigenvalues of covariance matrix via $\lambda_i = s_i^2/(n-1)$ . Eigenvalues $\lambda_i$ show variances of the respective PCs. Standardized scores are given by columns of $\sqrt{n-1}\mathbf U$ and loadings are given by columns of $\mathbf V \mathbf S/\sqrt{n-1}$ . See e.g. here and here for why "loadings" should not be confused with principal directions. The above is correct only if $\mathbf X$ is centered. Only then is covariance matrix equal to $\mathbf X^\top \mathbf X/(n-1)$ . The above is correct only for $\mathbf X$ having samples in rows and variables in columns. If variables are in rows and samples in columns, then $\mathbf U$ and $\mathbf V$ exchange interpretations. If one wants to perform PCA on a correlation matrix (instead of a covariance matrix), then columns of $\mathbf X$ should not only be centered, but standardized as well, i.e. divided by their standard deviations. 
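A minimal numeric check of this correspondence in base R (random data, purely for illustration; singular vectors and eigenvectors may differ by sign):
set.seed(1)
X <- scale(matrix(rnorm(100 * 5), 100, 5), center = TRUE, scale = FALSE)  # centered data matrix
n <- nrow(X)
s <- svd(X)
e <- eigen(crossprod(X) / (n - 1))                       # eigen-decomposition of the covariance matrix
all.equal(s$d^2 / (n - 1), e$values)                     # eigenvalues recovered from the singular values
max(abs(abs(s$u %*% diag(s$d)) - abs(X %*% e$vectors)))  # PCs as U S vs X V: agree up to sign, so near zero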
To reduce the dimensionality of the data from $p$ to $k<p$ , select $k$ first columns of $\mathbf U$ , and $k\times k$ upper-left part of $\mathbf S$ . Their product $\mathbf U_k \mathbf S_k$ is the required $n \times k$ matrix containing first $k$ PCs. Further multiplying the first $k$ PCs by the corresponding principal axes $\mathbf V_k^\top$ yields $\mathbf X_k = \mathbf U_k^\vphantom \top \mathbf S_k^\vphantom \top \mathbf V_k^\top$ matrix that has the original $n \times p$ size but is of lower rank (of rank $k$ ). This matrix $\mathbf X_k$ provides a reconstruction of the original data from the first $k$ PCs. It has the lowest possible reconstruction error, see my answer here . Strictly speaking, $\mathbf U$ is of $n\times n$ size and $\mathbf V$ is of $p \times p$ size. However, if $n>p$ then the last $n-p$ columns of $\mathbf U$ are arbitrary (and corresponding rows of $\mathbf S$ are constant zero); one should therefore use an economy size (or thin ) SVD that returns $\mathbf U$ of $n\times p$ size, dropping the useless columns. For large $n\gg p$ the matrix $\mathbf U$ would otherwise be unnecessarily huge. The same applies for an opposite situation of $n\ll p$ . Further links What is the intuitive relationship between SVD and PCA -- a very popular and very similar thread on math.SE. Why PCA of data by means of SVD of the data? -- a discussion of what are the benefits of performing PCA via SVD [short answer: numerical stability]. PCA and Correspondence analysis in their relation to Biplot -- PCA in the context of some congeneric techniques, all based on SVD. Is there any advantage of SVD over PCA? -- a question asking if there any benefits in using SVD instead of PCA [short answer: ill-posed question]. Making sense of principal component analysis, eigenvectors & eigenvalues -- my answer giving a non-technical explanation of PCA. To draw attention, I reproduce one figure here: | {
"source": [
"https://stats.stackexchange.com/questions/134282",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/28666/"
]
} |
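As a quick numerical check of the identities above, here is a minimal R sketch (toy data and arbitrary names; the thread itself is language-agnostic): the eigen-decomposition of the covariance matrix and the SVD of the centered data matrix should agree up to column signs.
set.seed(1)
n <- 200; p <- 4
X <- scale(matrix(rnorm(n * p), n, p), center = TRUE, scale = FALSE)  # centered data matrix
e <- eigen(cov(X))   # principal axes and eigenvalues from the covariance matrix
s <- svd(X)          # SVD of the centered data
all.equal(e$values, s$d^2 / (n - 1))                      # lambda_i = s_i^2 / (n - 1)
all.equal(abs(e$vectors), abs(s$v))                       # principal directions (up to sign)
all.equal(abs(X %*% e$vectors), abs(s$u %*% diag(s$d)))   # scores: XV = US (up to sign)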
134,293 | Let's say I want to write a simulation for the table below to decide if Xylitol treatment and ear infections are independent. How would I go about doing this? | {
"source": [
"https://stats.stackexchange.com/questions/134293",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/67081/"
]
} |
134,380 | I was wondering if there is a way to tell the probability of something failing (a product) if we have 100,000 products in the field for 1 year and with no failures? What is the probability that one of the next 10,000 products sold fail? | The probability that a product will fail is surely a function of time and use. We don't have any data on use, and with only one year there are no failures (congratulations!). Thus, this aspect (called the survival function ), cannot be estimated from your data. You can think of failures within one year as draws from a binomial distribution , however. You still have no failures, but this is now a common problem. A simple solution is to use the rule of 3 , which is accurate with large $N$ (which you certainly have). Specifically, you can get the upper bound of a one-sided 95% confidence interval (i.e., the lower bound is $0$) on the true probability of failure within one year as $3/N$. In your case, you are 95% confident that the rate is less than $0.00003$. You also asked how to compute the probability that one or more of the next 10k fails. A quick and simple (albeit extreme) way to extend the above analysis is to just use the upper bound as the underlying probability and use the corresponding binomial CDF to get the probability that there won't be $0$ failures. Using R code, we could do: 1-pbinom(0, size=10000, prob=0.00003) , which yields a 0.2591851 chance of seeing one or more failures in the next 10k products. By having used the upper bound, this is not the optimal point estimate of the probability of having at least one failure, rather you can say it is very unlikely that the probability of $\ge 1$ failure is more than $\approx 26\%$ (recognizing that this is a somewhat 'hand-wavy' framing). Another possibility is to use @amoeba's suggestion of the estimate from Laplace's rule of succession . The rule of succession states that the estimated probability of failure is $(F+1)/(N+2)$, where $F$ is the number of failures. In that case, $\hat p = 9.9998\times 10^{-06}$, and the calculation for the predicted probability of $1^+$ failures in the next 10,000 is 1-pbinom(0, size=10000, prob=9.9998e-06) , yielding 0.09516122 , or $\approx 10\%$. | {
"source": [
"https://stats.stackexchange.com/questions/134380",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/67131/"
]
} |
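The two calculations described above fit in a few lines of R (same numbers as in the answer; nothing new is assumed):
p_upper <- 3 / 100000                          # rule-of-three upper bound after 100,000 failure-free units
p_laplace <- (0 + 1) / (100000 + 2)            # Laplace's rule-of-succession estimate
1 - pbinom(0, size = 10000, prob = p_upper)    # ~0.26: P(at least one failure among the next 10,000)
1 - pbinom(0, size = 10000, prob = p_laplace)  # ~0.095 under the Laplace estimate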
134,701 | What is the intuitive difference between a random variable converging in probability versus a random variable converging in distribution? I've read numerous definitions and mathematical equations, but that does not really help. (Please keep in mind, I am undergraduate student studying econometrics.) How can a random variable converge to a single number, but also converge to a distribution? | How can a random number converge to a constant? Let's say you have $N$ balls in the box. You can pick them one by one. After you picked $k$ balls, I ask you: what's the mean weight of the balls in the box? Your best answer would be $\bar x_k=\frac{1}{k}\sum_{i=1}^kx_i$. You realize that $\bar x_k$ itself is the random value? It depends on which $k$ balls you picked first. Now, if you keep pulling the balls, at some point there'll be no balls left in the box, and you'll get $\bar x_N\equiv\mu$. So, what we've got is the random sequence $$\bar x_1,\dots,\bar x_k, \dots, \bar x_N ,\bar x_N, \bar x_N, \dots $$ which converges to the constant $\bar x_N = \mu$. So, the key to understanding your issue with convergence in probability is realizing that we're talking about a sequence of random variables, constructed in a certain way . Next, let's get uniform random numbers $e_1,e_2,\dots$, where $e_i\in [0,1]$. Let's look at the random sequence $\xi_1,\xi_2,\dots$, where $\xi_k=\frac{1}{\sqrt{\frac{k}{12}}}\sum_{i=1}^k \left(e_i- \frac{1}{2} \right)$. The $\xi_k$ is a random value, because all its terms are random values. We can't predict what is $\xi_k$ going to be. However, it turns out that we can claim that the probability distributions of $\xi_k$ will look more and more like the standard normal $\mathcal{N}(0,1)$. That's how the distributions converge. | {
"source": [
"https://stats.stackexchange.com/questions/134701",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/24243/"
]
} |
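A rough R sketch of the second example above (the answer itself gives no code; the value of k and the number of replications here are arbitrary): draw $\xi_k$ many times and compare its histogram with the standard normal density.
set.seed(1)
k <- 500; reps <- 10000
xi_k <- replicate(reps, sum(runif(k) - 0.5) / sqrt(k / 12))  # many independent draws of xi_k
hist(xi_k, freq = FALSE, breaks = 50)    # looks more and more like N(0, 1) as k grows
curve(dnorm(x), add = TRUE)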
135,061 | I have a question related to modeling short time-series. It is not a question of whether to model them, but how. What method would you recommend for modeling (very) short time-series (say of length $T \leq 20$)? By "best" I mean here the most robust one, that is, the one least prone to errors due to the limited number of observations. With short series, single observations could influence the forecast, so the method should provide a cautious estimate of errors and possible variability connected to the forecast. I am generally interested in univariate time-series, but it would also be interesting to know about other methods. | It is very common for extremely simple forecasting methods like "forecast the historical average" to outperform more complex methods. This is even more likely for short time series. Yes, in principle you can fit an ARIMA or even more complex model to 20 or fewer observations, but you will be rather likely to overfit and get very bad forecasts. So: start with a simple benchmark, e.g., the historical mean; the historical median, for added robustness; the random walk (forecast the last observation out). Assess these on out-of-sample data. Compare any more complex model to these benchmarks. You may be surprised at seeing how hard it is to outperform these simple methods. In addition, compare the robustness of different methods to these simple ones, e.g., by not only assessing average accuracy out-of-sample, but also the error variance, using your favorite error measure. Yes, as Rob Hyndman writes in his post that Aleksandr links to, out-of-sample testing is a problem in itself for short series - but there really is no good alternative. (Don't use in-sample fit, which is no guide to forecasting accuracy.) The AIC won't help you with the median and the random walk. However, you could use time-series cross-validation, which AIC approximates, anyway. | {
"source": [
"https://stats.stackexchange.com/questions/135061",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/35989/"
]
} |
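A minimal sketch of the benchmark comparison suggested above, in R, with a made-up short series (the AR(1) toy series and the 15/5 train-test split are arbitrary choices):
set.seed(1)
y <- 20 + arima.sim(list(ar = 0.5), n = 20)   # a hypothetical series of length T = 20
train <- y[1:15]; test <- y[16:20]
fc <- c(mean   = mean(train),       # historical mean
        median = median(train),     # historical median
        rw     = tail(train, 1))    # random walk: forecast = the last observation
sapply(fc, function(f) mean(abs(test - f)))   # out-of-sample MAE of each benchmark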
135,124 | I wish to create a toy survival (time to event) data which is right censored and follows some distribution with proportional hazards and constant baseline hazard. I created the data as follows, but I am unable to obtain estimated hazard ratios that are close to the true values after fitting a Cox proportional hazards model to the simulated data. What did I do wrong? R codes: library(survival)
#set parameters
set.seed(1234)
n = 40000 #sample size
#functional relationship
lambda=0.000020 #constant baseline hazard 2 per 100000 per 1 unit time
b_haz <-function(t) #baseline hazard
{
lambda #constant hazard wrt time
}
x = cbind(hba1c=rnorm(n,2,.5)-2,age=rnorm(n,40,5)-40,duration=rnorm(n,10,2)-10)
B = c(1.1,1.2,1.3) # hazard ratios (model coefficients)
hist(x %*% B) #distribution of scores
haz <-function(t) #hazard function
{
b_haz(t) * exp(x %*% B)
}
c_hf <-function(t) #cumulative hazards function
{
exp(x %*% B) * lambda * t
}
S <- function(t) #survival function
{
exp(-c_hf(t))
}
S(.005)
S(1)
S(5)
#simulate censoring
time = rnorm(n,10,2)
S_prob = S(time)
#simulate events
event = ifelse(runif(1)>S_prob,1,0)
#model fit
km = survfit(Surv(time,event)~1,data=data.frame(x))
plot(km) #kaplan-meier plot
#Cox PH model
fit = coxph(Surv(time,event)~ hba1c+age+duration, data=data.frame(x))
summary(fit)
cox.zph(fit) Results: Call:
coxph(formula = Surv(time, event) ~ hba1c + age + duration, data = data.frame(x))
n= 40000, number of events= 3043
coef exp(coef) se(coef) z Pr(>|z|)
hba1c 0.236479 1.266780 0.035612 6.64 3.13e-11 ***
age 0.351304 1.420919 0.003792 92.63 < 2e-16 ***
duration 0.356629 1.428506 0.008952 39.84 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
exp(coef) exp(-coef) lower .95 upper .95
hba1c 1.267 0.7894 1.181 1.358
age 1.421 0.7038 1.410 1.432
duration 1.429 0.7000 1.404 1.454
Concordance= 0.964 (se = 0.006 )
Rsquare= 0.239 (max possible= 0.767 )
Likelihood ratio test= 10926 on 3 df, p=0
Wald test = 10568 on 3 df, p=0
Score (logrank) test = 11041 on 3 df, p=0 but true values are set as B = c(1.1,1.2,1.3) # hazard ratios (model coefficients) | It is not clear to me how you generate your event times (which, in your case, might be $<0$) and event indicators: time = rnorm(n,10,2)
S_prob = S(time)
event = ifelse(runif(1)>S_prob,1,0) So here is a generic method, followed by some R code. Generating survival times to simulate Cox proportional hazards models To generate event times from the proportional hazards model, we can use the inverse probability method (Bender et al., 2005) : if $V$ is uniform on $(0, 1)$ and if $S(\cdot \,|\, \mathbf{x})$ is the conditional survival function derived from the proportional hazards model, i.e.
$$
S(t \,|\, \mathbf{x}) = \exp \left( -H_0(t) \exp(\mathbf{x}^\prime \mathbf{\beta}) \vphantom{\Big(} \right)
$$
then it is a fact that the random variable
$$
T = S^{-1}(V \,|\, \mathbf{x}) = H_0^{-1} \left( - \frac{\log(V)}{\exp(\mathbf{x}^\prime \mathbf{\beta})} \right)
$$
has survival function $S(\cdot \,|\, \mathbf{x})$. This result is known as ``the inverse probability integral transformation''. Therefore, to generate a survival time $T \sim S(\cdot \,|\, \mathbf{x})$ given the covariate vector, it suffices to draw $v$ from $V \sim \mathrm{U}(0, 1)$ and to make the inverse transformation $t = S^{-1}(v \,|\, \mathbf{x})$. Example [Weibull baseline hazard] Let $h_0(t) = \lambda \rho t^{\rho - 1}$ with shape $\rho > 0$ and scale $\lambda > 0$. Then $H_0(t) = \lambda t^\rho$ and $H^{-1}_0(t) = (\frac{t}{\lambda})^{\frac{1}{\rho}}$. Following the inverse probability method, a realisation of $T \sim S(\cdot \,|\, \mathbf{x})$ is obtained by computing
$$
t = \left( - \frac{\log(v)}{\lambda \exp(\mathbf{x}^\prime \mathbf{\beta})} \right)^{\frac{1}{\rho}}
$$
with $v$ a uniform variate on $(0, 1)$. Using results on transformations of random variables, one may notice that $T$ has a conditional Weibull distribution (given $\mathbf{x}$) with shape $\rho$ and scale $\lambda \exp(\mathbf{x}^\prime \mathbf{\beta})$. R code The following R function generates a data set with a single binary covariate $x$ (e.g. a treatment indicator). The baseline hazard has a Weibull form. Censoring times are randomly drawn from an exponential distribution. # baseline hazard: Weibull
# N = sample size
# lambda = scale parameter in h0()
# rho = shape parameter in h0()
# beta = fixed effect parameter
# rateC = rate parameter of the exponential distribution of C
simulWeib <- function(N, lambda, rho, beta, rateC)
{
# covariate --> N Bernoulli trials
x <- sample(x=c(0, 1), size=N, replace=TRUE, prob=c(0.5, 0.5))
# Weibull latent event times
v <- runif(n=N)
Tlat <- (- log(v) / (lambda * exp(x * beta)))^(1 / rho)
# censoring times
C <- rexp(n=N, rate=rateC)
# follow-up times and event indicators
time <- pmin(Tlat, C)
status <- as.numeric(Tlat <= C)
# data set
data.frame(id=1:N,
time=time,
status=status,
x=x)
} Test Here is some quick simulation with $\beta = -0.6$: set.seed(1234)
betaHat <- rep(NA, 1e3)
for(k in 1:1e3)
{
dat <- simulWeib(N=100, lambda=0.01, rho=1, beta=-0.6, rateC=0.001)
fit <- coxph(Surv(time, status) ~ x, data=dat)
betaHat[k] <- fit$coef
}
> mean(betaHat)
[1] -0.6085473 | {
"source": [
"https://stats.stackexchange.com/questions/135124",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/31157/"
]
} |
135,475 | Why are “time series” called such? Series means sum of a sequence. Why is it time Series, not time sequence? Is time the independent variable? | Why is it "Time Series", not "Time Sequence"? This inconsistency bugged me too the first time I saw it! But note that outside mathematics, people often use "series" to refer to what mathematicians might call a sequence. For example, the Oxford English dictionary online gives the main definition of "series" as a "number of events, objects, or people of a similar or related kind coming one after another". This is what is happening in a time series: you have a sequence of observations coming one after the other. This is equivalent to the usage of the word in such phrases as "TV series" (one episode after another), "series circuit" (the current flows through each component successively), the World Series (a sequence of baseball games one after the other) and so on. The etymology of "series" comes from the early 17th century, "from Latin, literally 'row, chain', from serere 'join, connect'", which is quite instructive. It didn't originally have the meaning of summation, but I can't find separate citations that establish when the word "series" was first used for the sum of the terms in a sequence. In fact it's quite common, particularly in older mathematics textbooks, to see the word "series" used where you might prefer "sequence", and "sum of series" where you might prefer "series". I don't know when this terminology was standardised in its present form. Here's an extract on arithmetic and geometric progressions from Daboll's Schoolmaster's assistant, improved and enlarged being a plain practical system of arithmetic: adapted to the United States - Nathan Daboll 's 1814 update to his 1799 original Daboll's schoolmaster's assistant: being a plain, practical system of arithmetic, adapted to the United States , which was one of the most popular mathematics education books in the US throughout much of the 19th century. The whole of Daboll's Schoolmaster's Assistant is available at archive.org and makes fascinating reading; it is the mathematics textbook that Herman Melville refers to in Moby-Dick (1851) and according to The Historical Roots of Elementary Mathematics by Bunt, Jones and Bedient (Dover Books, 1988) was predominant in American schools until 1850. At some point I may check some later standard texts; I do not think the hard distinction between "sequence" and "series" in mathematics arose until rather later. Is time the independent variable? This is basically the right idea: for instance when you plot a time series, we normally show the observations on the vertical axis while the horizontal axis represents time elapsed. And certainly it's true you wouldn't regard time as a dependent variable, since that would make no sense from a causation point of view. Your observations depend on time, and not vice versa. But note that "time" is usually referred to by an index number to signify the position of the observation ($X_1, X_2, X_3, ...$) rather than by a particular year/date/time - we don't generally see things like $X_\text{1 Jan 1998}, X_\text{2 Jan 1998}, X_\text{3 Jan 1998},...$. Also the time series $X_1, X_2, X_3, ...$ is considered univariate , meaning "one variable". This is in contrast to performing a bivariate ("two variable") regression analysis of your observed values, $X$, against time, $t$. There you would consider your data set as built out of two variables $X_1, X_2, X_3, ...$ against $t_1, t_2, t_3, ...$. 
In a time series, time is generally represented just by the index number (position in the sequence), not a separate variable in its own right. | {
"source": [
"https://stats.stackexchange.com/questions/135475",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/31466/"
]
} |
135,665 | So this is a very simple and basic question. However, when I was in school, I paid very little attention to the whole concept of simulations in class and that's left me a little terrified of that process. Can you explain the simulation process in laymen terms? (could be for generating data, regression coefficients, etc) What are some practical situations/problems when one would use simulations? I would prefer any examples given to be in R. | A quantitative model emulates some behavior of the world by (a) representing objects by some of their numerical properties and (b) combining those numbers in a definite way to produce numerical outputs that also represent properties of interest. In this schematic, three numerical inputs on the left are combined to produce one numerical output on the right. The number lines indicate possible values of the inputs and output; the dots show specific values in use. Nowadays digital computers usually perform the calculations, but they are not essential: models have been calculated with pencil-and-paper or by building "analog" devices in wood, metal, and electronic circuits. As an example, perhaps the preceding model sums its three inputs. R code for this model might look like inputs <- c(-1.3, 1.2, 0) # Specify inputs (three numbers)
output <- sum(inputs) # Run the model
print(output) # Display the output (a number) Its output simply is a number, -0.1 We cannot know the world perfectly: even if the model happens to work exactly the way the world does, our information is imperfect and things in the world vary. (Stochastic) simulations help us understand how such uncertainty and variation in the model inputs ought to translate into uncertainty and variation in the outputs. They do so by varying the inputs randomly, running the model for each variation, and summarizing the collective output. "Randomly" does not mean arbitrarily. The modeler must specify (whether knowingly or not, whether explicitly or implicitly) the intended frequencies of all the inputs. The frequencies of the outputs provide the most detailed summary of the results. The same model, shown with random inputs and the resulting (computed) random output. The figure displays frequencies with histograms to represent distributions of numbers. The intended input frequencies are shown for the inputs at left, while the computed output frequency, obtained by running the model many times, is shown at right. Each set of inputs to a deterministic model produces a predictable numeric output. When the model is used in a stochastic simulation, however, the output is a distribution (such as the long gray one shown at right). The spread of the output distribution tells us how the model outputs can be expected to vary when its inputs vary. The preceding code example might be modified like this to turn it into a simulation: n <- 1e5 # Number of iterations
inputs <- rbind(rgamma(n, 3, 3) - 2,
runif(n, -2, 2),
rnorm(n, 0, 1/2))
output <- apply(inputs, 2, sum)
hist(output, freq=FALSE, col="Gray") Its output has been summarized with a histogram of all the numbers generated by iterating the model with these random inputs: Peering behind the scenes, we may inspect some of the many random inputs that were passed to this model: rownames(inputs) <- c("First", "Second", "Third")
print(inputs[, 1:5], digits=2) The output shows the first five out of $100,000$ iterations, with one column per iteration: [,1] [,2] [,3] [,4] [,5]
First -1.62 -0.72 -1.11 -1.57 -1.25
Second 0.52 0.67 0.92 1.54 0.24
Third -0.39 1.45 0.74 -0.48 0.33 Arguably, the answer to the second question is that simulations can be used everywhere. As a practical matter, the expected cost of running the simulation should be less than the likely benefit. What are the benefits of understanding and quantifying variability? There are two primary areas where this is important: Seeking the truth , as in science and the law. A number by itself is useful, but it is far more useful to know how accurate or certain that number is. Making decisions, as in business and daily life. Decisions balance risks and benefits. Risks depend on the possibility of bad outcomes. Stochastic simulations help assess that possibility. Computing systems have become powerful enough to execute realistic, complex models repeatedly. Software has evolved to support generating and summarizing random values quickly and easily (as the second R example shows). These two factors have combined over the last 20 years (and more) to the point where simulation is routine. What remains is to help people (1) specify appropriate distributions of inputs and (2) understand the distribution of outputs. That is the domain of human thought, where computers so far have been little help. | {
"source": [
"https://stats.stackexchange.com/questions/135665",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/29640/"
]
} |
136,671 | I have labored under the belief that the sample median is more robust measure of central tendency than the sample mean, since it ignores outliers. I was therefore surprised to learn (in the answer to another question ) that for samples drawn from a normal distribution, the variance of the sample mean is less than the variance of the sample median (at least for large $n$). I understand mathematically why this is true. Is there a "philosophical" way of looking at this that that would help with intuition about when to use the median rather than the mean for other distributions? Are there mathematical tools that help quickly answer the question for a particular distribution? | Let's assume we restrict consideration to symmetric distributions where the mean and variance are finite (so the Cauchy, for example, is excluded from consideration). Further, I'm going to limit myself initially to continuous unimodal cases, and indeed mostly to 'nice' situations (though I might come back later and discuss some other cases). The relative variance depends on sample size. It's common to discuss the ratio of ( $n$ times the) the asymptotic variances, but we should keep in mind that at smaller sample sizes the situation will be somewhat different. (The median sometimes does noticeably better or worse than its asymptotic behaviour would suggest. For example, at the normal with $n=3$ it has an efficiency of about 74% rather than 63%. The asymptotic behavior is generally a good guide at quite moderate sample sizes, though.) The asymptotics are fairly easy to deal with: Mean: $n\times$ variance = $\sigma^2$ . Median : $n\times$ variance = $\frac{1}{[4f(m)^2]}$ where $f(m)$ is the height of the density at the median. So if $f(m)>\frac{1}{2\sigma}$ , the median will be asymptotically more efficient. [In the normal case, $f(m)=
\frac{1}{\sqrt{2\pi}\sigma}$ , so $\frac{1}{[4f(m)^2]}=\frac{\pi\sigma^2}{2}$ , whence the asymptotic relative efficiency of $2/\pi$ )] We can see that the variance of the median will depend on the behaviour of the density very near the center, while the variance of the mean depends on the variance of the original distribution (which in some sense is affected by the density everywhere, and in particular, more by the way it behaves further away from the center) Which is to say, while the median is less affected by outliers than the mean, and we often see that it has lower variance than the mean when the distribution is heavy tailed (which does produce more outliers), what really drives the performance of the median is inliers . It often happens that (for a fixed variance) there's a tendency for the two to go together. That is, broadly speaking, as the tail gets heavier, there's a tendency for (at a fixed value of $\sigma^2$ ) the distribution to get "peakier" at the same time (more kurtotic , in Pearson's original, if loose, sense). This is not, however, a certain thing - it tends to be the case across a broad range of commonly considered densities, but it doesn't always hold. When it does hold, the variance of the median will reduce (because the distribution has more probability in the immediate neighborhood of the median), while the variance of the mean is held constant (because we fixed $\sigma^2$ ). So across a variety of common cases the median will often tend to do "better" than the mean when the tail is heavy, (but we must keep in mind that it's relatively easy to construct counterexamples). So we can consider a few cases, which can show us what we often see, but we shouldn't read too much into them, because heavier tail doesn't universally go with higher peak. We know the median is about 63.7% as efficient (for $n$ large) as the mean at the normal. What about, say a logistic distribution, which like the normal is approximately parabolic about the center, but has heavier tails (as $x$ becomes large, they become exponential). If we take the scale parameter to be 1, the logistic has variance $\pi^2/3$ and height at the median of 1/4, so $\frac{1}{4f(m)^2}=4$ . The ratio of variances is then $\pi^2/12\approx 0.82$ so in large samples, the median is roughly 82% as efficient as the mean. Let's consider two other densities with exponential-like tails, but different peakedness. First, the hyperbolic secant ( $\text{sech}$ ) distribution , for which the standard form has variance 1 and height at the center of $\frac{1}{2}$ , so the ratio of asymptotic variances is 1 (the two are equally efficient in large samples). However, in small samples the mean is more efficient (its variance is about 95% of that for the median when $n=5$ , for example). Here we can see how, as we progress through those three densities (holding variance constant), that the height at the median increases: Can we make it go still higher? Indeed we can. Consider, for example, the double exponential . The standard form has variance 2, and the height at the median is $\frac{1}{2}$ (so if we scale to unit variance as in the diagram, the peak is at $\frac{1}{\sqrt{2}}$ , just above 0.7). The asymptotic variance of the median is half that of the mean. If we make the distribution peakier still for a given variance, (perhaps by making the tail heavier than exponential), the median can be far more efficient (relatively speaking) still. There's really no limit to how high that peak can go. 
If we had instead used examples from say the t-distributions, broadly similar effects would be seen, but the progression would be different; the crossover point is a little below $\nu=5$ df (actually around 4.68) -- for smaller df the median is more efficient (asymptotically), for large df the mean is. ... At finite sample sizes, it's sometimes possible to compute the variance of the distribution of the median explicitly. Where that's not feasible - or even just inconvenient - we can use simulation to compute the variance of the median (or the ratio of the variance*) across random samples drawn from the distribution (which is what I did to get the small sample figures above). * Even though we often don't actually need the variance of the mean, since we can compute it if we know the variance of the distribution, it may be more computationally efficient to do so, since it acts like a control variate (the mean and median are often quite correlated). | {
"source": [
"https://stats.stackexchange.com/questions/136671",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/59602/"
]
} |
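A quick simulation sketch of the variance comparison above (a toy check only; n and the number of replications are arbitrary): estimate the sampling variance of the mean and of the median at the normal and at the double exponential.
set.seed(1)
n <- 100; reps <- 20000
rel_var <- function(rgen) {
  est <- replicate(reps, { x <- rgen(n); c(mean = mean(x), median = median(x)) })
  apply(est, 1, var)   # sampling variance of the mean and of the median
}
rel_var(rnorm)                                             # normal: var(median)/var(mean) roughly pi/2
rel_var(function(n) rexp(n) * sample(c(-1, 1), n, TRUE))   # double exponential: ratio roughly 1/2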
136,870 | I assume the following is true: assuming a fair coin, getting 10 heads in a row whilst tossing a coin does not increase the chance of the next coin toss being a tail , no matter what amount of probability and/or statistical jargon is tossed around (excuse the puns). Assuming that is the case, my question is this: how the hell do I convince someone that is the case? They are smart and educated but seem determined not to consider that I might be in the right on this (argument). | they are trying to assert that [...] if there have been 10 heads, then the next in the sequence will more likely be a tail because statistics says it will balance out in the end There's only a "balancing out" in a very particular sense. If it's a fair coin, then it's still 50-50 at every toss. The coin cannot know its past . It cannot know there was an excess of heads. It cannot compensate for its past. Ever . it just goes on randomly being heads or tails with constant chance of a head. If $n_H$ is the number of heads in $n=n_H+n_T$ tosses ($n_T$ is the number of tails), for a fair coin, $n_H/n_T$ will tend to 1, as $n_H+n_T$ goes to infinity .... but $|n_H-n_T|$ doesn't go to 0. In fact, it also goes to infinity! That is, nothing acts to make them more even. The counts don't tend toward "balancing out". On average, imbalance between the count of heads and tails actually grows! Here's the result of 100 sets of 1000 tosses, with the grey traces showing the difference in number of head minus number of tails at every step. The grey traces (representing $n_H-n_T$) are a Bernoulli random walk. If you think of a particle moving up or down the y-axis by a unit step (randomly with equal probability) at each time-step, then the distribution of the position of the particle will 'diffuse' away from 0 over time. It still has 0 expected value, but its expected distance from 0 grows as the square root of the number of time steps. [Note for anyone thinking " is he talking about expected absolute difference or the RMS difference " -- actually either: for large $n$ the first is $\sqrt{2/\pi}\approx$ 80% of the second.] The blue curve above is at $\pm \sqrt{n}$ and the green curve is at $\pm 2\sqrt{n}$. As you see, the typical distance between total heads and total tails grows. If there was anything acting to 'restore to equality' - to 'make up for' deviations from equality - they wouldn't tend to typically grow further apart like that. (It's not hard to show this algebraically, but I doubt that would convince your friend. The critical part is that the variance of a sum of independent random variables is the sum of the variances $<$see the end of the linked section$>$ -- every time you add another coin flip, you add a constant amount onto the variance of the sum... so variance must grow proportionally with $n$. Consequently the standard deviation increases with $\sqrt{n}$. The constant that gets added to variance at each step in this case happens to be 1, but that's not crucial to the argument.) Equivalently, $\frac{|n_H-n_T|}{n_H+n_T}$ does go to $0$ as the total tosses goes to infinity, but only because $n_H+n_T$ goes to infinity a lot faster than $|n_H-n_T|$ does. That means if we divide that cumulative count by $n$ at each step, it curves in -- the typical absolute difference in count is of the order of $\sqrt{n}$, but the typical absolute difference in proportion must then be of the order of $1/\sqrt{n}$. That's all that's going on. 
The increasingly-large* random deviations from equality are just " washed out " by the even bigger denominator. * increasing in typical absolute size See the little animation in the margin, here If your friend is unconvinced, toss some coins. Every time you get say three heads in a row, get him or her to nominate a probability for a head on the next toss (that's less than 50%) that he thinks must be fair by his reasoning. Ask for them to give you the corresponding odds (that is, he or she must be willing to pay a bit more than 1:1 if you bet on heads, since they insist that tails is more likely). It's best if it's set up as a lot of bets each for a small amount of money. (Don't be surprised if there's some excuse as to why they can't take up their half of the bet -- but it does at least seem to dramatically reduce the vehemence with which the position is held.) [However, all this discussion is predicated on the coin being fair. If the coin wasn't fair (50-50), then a different version of the discussion - based around deviations from the expected proportion-difference would be required. Having 10 heads in 10 tosses might make you suspicious of the assumption of p=0.5. A well tossed coin should be close to fair - weighted or not - but in fact still exhibit small but exploitable bias , especially if the person exploiting it is someone like Persi Diaconis. Spun coins on the other hand, may be quite susceptible to bias due to more weight on one face.] | {
"source": [
"https://stats.stackexchange.com/questions/136870",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/68492/"
]
} |
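Here is a small R sketch of the picture described above (one simulated sequence of fair tosses; the dashed lines are the ±2√n guides corresponding to the green curves):
set.seed(42)
n <- 1000
d <- cumsum(sample(c(1, -1), n, replace = TRUE))   # running heads-minus-tails count
plot(d, type = "l", xlab = "toss", ylab = "heads minus tails")
lines(2 * sqrt(1:n), lty = 2); lines(-2 * sqrt(1:n), lty = 2)
plot(d / (1:n), type = "l", xlab = "toss", ylab = "difference in proportions")  # this one shrinks toward 0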
137,702 | I've been reading up on $p$-values, type 1 error rates, significance levels, power calculations, effect sizes and the Fisher vs Neyman-Pearson debate. This has left me feeling a bit overwhelmed. I apologise for the wall of text, but I felt it was necessary to provide an overview of my current understanding of these concepts, before I moved on to my actual questions. From what I've gathered, a $p$-value is simply a measure of surprise, the probability of obtaining a result at least as extreme, given that the null hypothesis is true. Fisher originally intended for it to be a continuous measure. In the Neyman-Pearson framework, you select a significance level in advance and use this as an (arbitrary) cut-off point. The significance level is equal to the type 1 error rate. It is defined by the long run frequency, i.e. if you were to repeat an experiment 1000 times and the null hypothesis is true, about 50 of those experiments would result in a significant effect, due to the sampling variability. By choosing a significance level, we are guarding ourselves against these false positives with a certain probability. $P$-values traditionally do not appear in this framework. If we find a $p$-value of 0.01 this does not mean that the type 1 error rate is 0.01, the type 1 error is stated a priori. I believe this is one of the major arguments in the Fisher vs N-P debate, because $p$-values are often reported as 0.05*, 0.01**, 0.001***. This could mislead people into saying that the effect is significant at a certain $p$-value, instead of at a certain significance value. I also realise that the $p$-value is a function of the sample size. Therefore, it cannot be used as an absolute measurement. A small $p$-value could point to a small, non-relevant effect in a large sample experiment. To counter this, it is important to perform an power/effect size calculation when determining the sample size for your experiment. $P$-values tell us whether there is an effect, not how large it is. See Sullivan 2012 . My question: How can I reconcile the facts that the $p$-value is a measure of surprise (smaller = more convincing) while at the same time it cannot be viewed as an absolute measurement? What I am confused about, is the following: can we be more confident in a small $p$-value than a large one? In the Fisherian sense, I would say yes, we are more surprised. In the N-P framework, choosing a smaller significance level would imply we are guarding ourselves more strongly against false positives. But on the other hand, $p$-values are dependent on sample size. They are not an absolute measure. Thus we cannot simply say 0.001593 is more significant than 0.0439. Yet this what would be implied in Fisher's framework: we would be more surprised to such an extreme value. There's even discussion about the term highly significant being a misnomer: Is it wrong to refer to results as being "highly significant"? I've heard that $p$-values in some fields of science are only considered important when they are smaller than 0.0001, whereas in other fields values around 0.01 are already considered highly significant. Related questions: Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoherent mishmash"? When to use Fisher and Neyman-Pearson framework? Is the exact value of a 'p-value' meaningless? Frequentist properties of p-values in relation to type I error Confidence intervals vs P-values for two means Why are lower p-values not more evidence against the null? 
Arguments from Johansson 2011 (as provided by @amoeba) | Are smaller $p$-values "more convincing"? Yes, of course they are. In the Fisher framework, $p$-value is a quantification of the amount of evidence against the null hypothesis. The evidence can be more or less convincing; the smaller the $p$-value, the more convincing it is. Note that in any given experiment with fixed sample size $n$, the $p$-value is monotonically related to the effect size, as @Scortchi nicely points out in his answer (+1). So smaller $p$-values correspond to larger effect sizes; of course they are more convincing! In the Neyman-Pearson framework, the goal is to obtain a binary decision: either the evidence is "significant" or it is not. By choosing the threshold $\alpha$, we guarantee that we will not have more than $\alpha$ false positives. Note that different people can have different $\alpha$ in mind when looking at the same data; perhaps when I read a paper from a field that I am skeptical about, I would not personally consider as "significant" results with e.g. $p=0.03$ even though the authors do call them significant. My personal $\alpha$ might be set to $0.001$ or something. Obviously the lower the reported $p$-value, the more skeptical readers it will be able to convince! Hence, again, lower $p$-values are more convincing. The currently standard practice is to combine Fisher and Neyman-Pearson approaches: if $p<\alpha$, then the results are called "significant" and the $p$-value is [exactly or approximately] reported and used as a measure of convincingness (by marking it with stars, using expressions as "highly significant", etc.); if $p>\alpha$ , then the results are called "not significant" and that's it. This is usually referred to as a "hybrid approach", and indeed it is hybrid. Some people argue that this hybrid is incoherent; I tend to disagree. Why would it be invalid to do two valid things at the same time? Further reading: Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoherent mishmash"? -- my question about the "hybrid". It generated some discussion, but I am still not satisfied with any of the answers, and plan to get back to that thread at some point. Is it wrong to refer to results as being "highly significant"? -- see my yesterday's answer, which is essentially saying: it isn't wrong (but perhaps a bit sloppy). Why are lower p-values not more evidence against the null? Arguments from Johansson 2011 -- an example of an anti-Fisher paper arguing that $p$-values do not provide evidence against the null; the top answer by @Momo does a good job in debunking the arguments. My answer to the title question is: But of course they are. | {
"source": [
"https://stats.stackexchange.com/questions/137702",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/62518/"
]
} |
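A toy R sketch of the sample-size point raised in the question (the shift of 0.2 and the two sample sizes are arbitrary; being Monte Carlo, only the qualitative pattern matters):
set.seed(1)
p_at <- function(n, shift) t.test(rnorm(n, mean = shift))$p.value
median(replicate(2000, p_at(20, 0.2)))    # same true effect, small n: p-values are typically large
median(replicate(2000, p_at(500, 0.2)))   # same true effect, larger n: typically far smaller p-values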
137,711 | I recently had a disagreement with a friend about minimizing the chance of dying in a plane due to a crash. This is a rudimentary statistics question. He stated that he prefers to fly direct to a destination, as it decreases the probability that he will die in an airplane crash. His logic was that if the probability of a commercial airline crash is 1 in 10,000, flying on two planes to get to your destination would double your chance of death. My point was that each time one flies on an airplane, it does not increase the likelihood that he will die in a future airplane crash. That is, each airplane flight is independent. Whether someone has flown on 100 planes that year or just 1, both fliers still have a 1 in 10,000 chance of dying in a plane crash on their next flight. Another point I made: say your destination is 4 hours away. If you take a direct flight, you will be in the air, at risk of being in a crash, for 4 hours. Now say you take 4 different connecting flights, each flight about an hour long. In this scenario you will still be in the air for roughly 4 hours. Thus, whether you take the direct flight or save some money and take connecting flights, the amount of time you spend at risk is roughly equal. My final point was that shorter flights have a lower rate of crashes. I just pulled that one out of nowhere. I've done zero research and have zero data to back that up but...it seems logical. Who is in the right, and why? There's a lot at stake here. | Actual odds of planes crashing aside, you're falling into a logical trap here: ...each time one flies on an airplane, it does not increase the likelihood that he will die in a future airplane crash. This is completely correct: whether you've never flown before or you've flown thousands of times, the chance of dying is still (in your example) 0.0001. So if you're deciding between the two-hop and one-hop option, you're probably thinking about two scenarios: Future you, transferring between the two flights. Chance of dying on next flight: 0.0001. Future you, about to board the only flight. Chance of dying on next flight: 0.0001. Same thing, right? Well, only if you assume you lived through the first flight in the first case. Put another way, in option 1, you're actually already dead 1/10,000th of the time. The general issue is that you're confusing two scenarios: your probability of being alive after $N$ flights your probability of being alive after $N$ flights given that you were alive after $N-1$ flights . Your chances of surviving one flight are always $1 - 0.0001$, but overall, the chances of living to the end of $N$ flights are $(1 - 0.0001)^N$ The Opposition View : I tried to keep my answer on topic by pointing out the logical issue rather than digressing into the empirical ones. That said, in this case we may be letting the logic obscure the science. If your friend actually believes that skipping one flight will save him from a 1 in 10,000 chance of dying in a plane crash, the debate could be framed differently: Your statement: a two-hop flight gives you a 0.0001 chance of dying His statement: a two-hop flight gives a 0.0002 chance of dying If this is the debate, it turns out that you are more correct . The actual odds of dying in a plane crash are about 1 in 2 million in the worst case. So you're both completely wrong, in that your estimates of airline fatalities are crazy high, but he's about twice as wrong as you are. This 1 in 2 million figure is, of course, very rough and likely an overestimate. 
It's approximately correct to assume constant chances of dying per flight because (as many have pointed out) most accidents happen on takeoff and landing. If you really want the details, there's a lot more detail in another answer . Condensed version: Your friend is right about probability theory, but given the statistics he's crazy to modify his behavior. | {
"source": [
"https://stats.stackexchange.com/questions/137711",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/68964/"
]
} |
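The arithmetic behind this can be checked in a couple of lines of R, using the question's 1-in-10,000 figure and the answer's rough 1-in-2-million figure:
p <- 1e-4                  # the question's (very pessimistic) per-flight risk
1 - (1 - p)^2              # risk over two independent flights: 1.9999e-04, essentially double
p_real <- 1 / 2e6          # the answer's rough real-world figure
1 - (1 - p_real)^2         # again almost exactly 2 * p_real; doubling is a fine approximation for tiny p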
137,965 | Can I use box-and-whisker plots also for multimodal distributions, or only for unimodal distributions? | The problem is that the usual boxplot* generally can't give an indication of the number of modes. While in some (generally rare) circumstances it is possible to get a clear indication that the smallest number of modes exceeds 1, more usually a given boxplot is consistent with one or any larger number of modes. * Several modifications of the usual kinds of boxplot have been suggested which do more to indicate changes in density and can be used to identify multiple modes, but I don't think those are the purpose of this question. For example, while this plot does indicate the presence of at least two modes (the data were generated so as to have exactly two) - conversely, this one has two very clear modes in its distribution but you simply can't tell that from the boxplot at all: Boxplots don't necessarily convey a lot of information about the distribution. In the absence of any marked points outside the whiskers, they contain only five values, and a five-number summary doesn't pin down the distribution much. However, the first figure above shows a case where the cdf is sufficiently "pinned down" to essentially rule out a unimodal distribution (at least at the sample size of $n=100$) -- no unimodal cdf is consistent with the constraints on the cdf in that case, which require a relatively sharp rise in the first quarter, a flattening out to (on average) a small rate of increase in the middle half and then changing to another sharp rise in the last quarter. Indeed, we can see that the five-number summary doesn't tell us a great deal in general: figure 1 here (which I believe is a working paper later published in [1]) shows four different data sets with the same box plot. I don't have that data to hand, but it's a trivial matter to make a similar data set - as indicated in the link above related to the five-number summary, we need only constrain our distributions to lie within the rectangular boxes that the five-number summary restricts us to. Here's R code which will generate similar data to that in the paper: x1 = qnorm(ppoints(1:100,a=-.072377))
x1 = x1/diff(range(x1))*18+10
b = fivenum(x1) # all of the data has this five number summary
x2 = qnorm(ppoints(1:48));x2=x2/diff(range(x2))*.6
x2 = c(b[1],x2+b[2],.31+b[2],b[4]-.31,x2+b[4],b[5])
d = .1183675; x3 = ((0:34)-34/2)/34*(9-d)+(5.5-d/2)
x3 = c(x3,rep(9.5,15),rep(10.5,15),20-x3)
x4 = c(1,rep(b[2],24),(0:49)/49*(b[4]-b[2])+b[2],(0:24)/24*(b[5]-b[4])+b[4]) Here's a similar display to that in the paper, of the above data (except I show all four boxplots here): There's a somewhat similar set of displays in Matejka & Fitzmaurice (2017)[2], though they don't seem to have a very skewed example like x4 (they do have some mildly skewed examples) - and they do have some trimodal examples not in [1]; the basic point of the examples is the same. Beware, however -- histograms can have problems, too ; indeed, we see one of its problems here, because the distribution in the third "peaked" histogram is actually distinctly bimodal; the histogram bin width is simply too wide to show it. Further, as Nick Cox points out in comments, kernel density estimates may also affect the impression of the number of modes (sometimes smearing out modes ... or sometimes suggesting small modes where none exist in the original distribution). One must take care with interpretation of many common displays. There are modifications of the boxplot that can better indicate multimodality (vase plots, violin plots and bean plots, among numerous others). In some situations they may be useful, but if I'm interested in finding modes I'll usually look at a different sort of display. Boxplots are better when interest focuses on comparisons of location and spread (and often perhaps to skewness $^\dagger$ ) rather than the particulars of distributional shape. If multimodality is important to show, I'd suggest looking at displays that are better at showing that - the precise choice of display depends on what you most want it to show well. $\dagger$ but not always - the fourth data set ( x4 ) in the example data above shows that you can easily have a distinctly skewed distribution with a perfectly symmetric boxplot. [1]: Choonpradub, C., & McNeil, D. (2005), "Can the boxplot be improved?" Songklanakarin J. Sci. Technol. , 27 :3, pp. 649-657. http://www.jourlib.org/paper/2081800 pdf [2]: Justin Matejka and George Fitzmaurice, (2017), "Same Stats, Different Graphs: Generating Datasets with Varied Appearance and Identical Statistics through Simulated Annealing". In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems ( CHI '17 ). Association for Computing Machinery, New York, NY, USA, 1290–1294. DOI: https://doi.org/10.1145/3025453.3025912 (See the pdf here ) | {
"source": [
"https://stats.stackexchange.com/questions/137965",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/69085/"
]
} |
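Assuming x1..x4 from the R code above are still in the workspace, a short follow-up draws the four (near-identical) boxplots next to histograms that reveal the very different shapes:
boxplot(list(x1, x2, x3, x4), names = paste0("x", 1:4))   # four data sets, same five-number summary
op <- par(mfrow = c(2, 2))
for (x in list(x1, x2, x3, x4)) hist(x, main = "")        # the histograms look nothing alike
par(op)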
138,046 | I'm reading various papers and I don't understand the meaning of three types of normalizations used.
Let's say I have the number of calls $X_i(t)$ in region $i$ at time $t$. I see it normalized with: Z-score: $X_i(t) = (X_i(t)-\mu(t)) / \sigma(t)$. This takes somehow the "shape" of the calls time series. dividing by the mean: $X_i(t) = {X_i(t)}/{\mu(t)}$ ref . This is unknown by me ^^ subtracting the mean: $X_i(t) = X_i(t) - \mu(t)$. What's the difference between 2 and 3? Why should I divide by the mean and what's its "meaning"? | The difference between subtracting the mean and dividing by the mean is the difference between subtraction and division; presumably you are not really asking about the mathematics. There is no mystery here, as it's no more than a statistical analogue of Bill is 5 cm taller than Betty (subtraction) Bill is twice the weight of his son Bob (division) with the difference that the mean is used as a reference level, rather than another value. We should emphasise that (Bill $-$ Betty) or (value $-$ mean) preserves units of measurement while (Bill / Bob) or (value / mean) is independent of units of measurement. and that subtraction of the mean is always possible, while division by the mean usually
only makes sense if the mean is guaranteed to be positive (or more widely that no two values have different signs and the mean cannot be zero). Taking it further then (value $-$ mean) / SD is scaling by the standard deviation and so again produces a measure independent of units of measurement, and also of the variability of the variable. It's always possible so long as the SD is positive, which does not bite. (If the SD were zero then every value is the same, and detailed summary is easy without any of these devices.) This kind of rescaling is often called standardization , although it is also true that that term too is overloaded. Note that subtraction of the mean (without or with division by SD) is just a change of units, so distribution plots and time series plots (which you ask about) look just the same before and after; the numeric axis labels will differ, but the shape is preserved. The choice is usually substantive rather than strictly statistical, so that it is question of which kind of adjustment is a helpful simplification, or indeed whether that is so. I'll add that your question points up in reverse a point often made on this forum that asking about normalization is futile unless a precise definition is offered; in fact, that are even more meanings in use than those you mentioned. The OP's context of space-time data is immaterial here; the principles apply regardless of whether you have temporal, spatial or spatial-temporal data. | {
"source": [
"https://stats.stackexchange.com/questions/138046",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/48622/"
]
} |
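A tiny R illustration of the three adjustments discussed above, with made-up call counts:
x <- c(120, 95, 180, 140, 165)   # hypothetical numbers of calls
x - mean(x)                      # subtracting the mean: same units, now centred on 0
x / mean(x)                      # dividing by the mean: unit-free, sensible here because the mean is positive
(x - mean(x)) / sd(x)            # z-scores: unit-free and also scaled by the spread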
138,069 | A box contains $5$ white and $2$ black balls. A coin with unknown $P(Head)=p$ is tossed once. If it lands HEADS then a white ball is added, else a black ball is added to the box. Then a ball is selected at random from the box. Given that the ball drawn is WHITE , find the Maximum Likelihood Estimator of $p$ . I find this problem quite confusing, really. It seems to be pretty straightforward and hence I am shocked by the substandard quality, else I am making some serious error. My attempt is as follows: $P(White)=P(White|Head)P(Head)+P(White|Tail)P(Tail)=\dfrac{6}{8}.p+\dfrac{5}{8}(1-p)=\dfrac{p}{8}+\dfrac{5}{8}$ This is actually my likelihood of $p$ given the sample (my sample is WHITE ball). So this is maximized for $\hat{p}=1$ . So $1$ (????) is the MLE for $p$ . It is a constant estimator. This is kind of weird. Any suggestion/correction/explanation is welcome. | {
"source": [
"https://stats.stackexchange.com/questions/138069",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/59485/"
]
} |
138,325 | I have a correlation matrix which states how every item is correlated to the other item. Hence for a N items, I already have a N*N correlation matrix. Using this correlation matrix how do I cluster the N items in M bins so that I can say that the Nk Items in the kth bin behave the same. Kindly help me out. All item values are categorical. Thanks. Do let me know if you need any more information. I need a solution in Python but any help in pushing me towards the requirements will be a big help. | Looks like a job for block modeling. Google for "block modeling" and the first few hits are helpful. Say we have a covariance matrix where N=100 and there are actually 5 clusters: What block modelling is trying to do is find an ordering of the rows, so that the clusters become apparent as 'blocks': Below is a code example that performs a basic greedy search to accomplish this. It's probably too slow for your 250-300 variables, but it's a start. See if you can follow along with the comments: import numpy as np
from matplotlib import pyplot as plt
# This generates 100 variables that could possibly be assigned to 5 clusters
n_variables = 100
n_clusters = 5
n_samples = 1000
# To keep this example simple, each cluster will have a fixed size
cluster_size = n_variables // n_clusters
# Assign each variable to a cluster
belongs_to_cluster = np.repeat(range(n_clusters), cluster_size)
np.random.shuffle(belongs_to_cluster)
# This latent data is used to make variables that belong
# to the same cluster correlated.
latent = np.random.randn(n_clusters, n_samples)
variables = []
for i in range(n_variables):
variables.append(
np.random.randn(n_samples) + latent[belongs_to_cluster[i], :]
)
variables = np.array(variables)
C = np.cov(variables)
def score(C):
'''
Function to assign a score to an ordered covariance matrix.
High correlations within a cluster improve the score.
High correlations between clusters decease the score.
'''
score = 0
for cluster in range(n_clusters):
inside_cluster = np.arange(cluster_size) + cluster * cluster_size
outside_cluster = np.setdiff1d(range(n_variables), inside_cluster)
# Belonging to the same cluster
score += np.sum(C[inside_cluster, :][:, inside_cluster])
# Belonging to different clusters
score -= np.sum(C[inside_cluster, :][:, outside_cluster])
score -= np.sum(C[outside_cluster, :][:, inside_cluster])
return score
initial_C = C
initial_score = score(C)
initial_ordering = np.arange(n_variables)
plt.figure()
plt.imshow(C, interpolation='nearest')
plt.title('Initial C')
print('Initial ordering:', initial_ordering)
print('Initial covariance matrix score:', initial_score)
# Pretty dumb greedy optimization algorithm that continuously
# swaps rows to improve the score
def swap_rows(C, var1, var2):
'''
Function to swap two rows in a covariance matrix,
updating the appropriate columns as well.
'''
D = C.copy()
D[var2, :] = C[var1, :]
D[var1, :] = C[var2, :]
E = D.copy()
E[:, var2] = D[:, var1]
E[:, var1] = D[:, var2]
return E
current_C = C
current_ordering = initial_ordering
current_score = initial_score
max_iter = 1000
for i in range(max_iter):
# Find the best row swap to make
best_C = current_C
best_ordering = current_ordering
best_score = current_score
for row1 in range(n_variables):
for row2 in range(n_variables):
if row1 == row2:
continue
option_ordering = best_ordering.copy()
option_ordering[row1] = best_ordering[row2]
option_ordering[row2] = best_ordering[row1]
option_C = swap_rows(best_C, row1, row2)
option_score = score(option_C)
if option_score > best_score:
best_C = option_C
best_ordering = option_ordering
best_score = option_score
if best_score > current_score:
# Perform the best row swap
current_C = best_C
current_ordering = best_ordering
current_score = best_score
else:
# No row swap found that improves the solution, we're done
break
# Output the result
plt.figure()
plt.imshow(current_C, interpolation='nearest')
plt.title('Best C')
print('Best ordering:', current_ordering)
print('Best score:', current_score)
print()
print('Cluster [variables assigned to this cluster]')
print('------------------------------------------------')
for cluster in range(n_clusters):
    print('Cluster %02d %s' % (cluster + 1, current_ordering[cluster*cluster_size:(cluster+1)*cluster_size])) | {
"source": [
"https://stats.stackexchange.com/questions/138325",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/69301/"
]
} |
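For completeness, a different and simpler off-the-shelf Python route than the greedy block-modeling search above is to turn the correlation matrix into a dissimilarity and hand it to hierarchical clustering from SciPy. This is not part of the answer above; the 1 - |r| dissimilarity, the linkage method and the toy matrix below are my own illustrative choices.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
def cluster_from_correlation(R, n_clusters):
    """Group variables using 1 - |r| as a dissimilarity."""
    D = 1.0 - np.abs(R)                  # high |correlation| -> small distance
    np.fill_diagonal(D, 0.0)             # guard against floating-point noise on the diagonal
    condensed = squareform(D, checks=False)
    Z = linkage(condensed, method='average')
    return fcluster(Z, t=n_clusters, criterion='maxclust')
# Tiny example: a 6x6 correlation matrix with two obvious blocks
R = np.array([
    [1.0, 0.9, 0.8, 0.1, 0.0, 0.1],
    [0.9, 1.0, 0.85, 0.0, 0.1, 0.0],
    [0.8, 0.85, 1.0, 0.1, 0.0, 0.1],
    [0.1, 0.0, 0.1, 1.0, 0.9, 0.8],
    [0.0, 0.1, 0.0, 0.9, 1.0, 0.85],
    [0.1, 0.0, 0.1, 0.8, 0.85, 1.0],
])
print(cluster_from_correlation(R, n_clusters=2))   # e.g. [1 1 1 2 2 2]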
138,332 | I’m trying to do logistic regression, I utilize the following command: mylogit <- glm(Var0 ~Var1, data = mydata, family = "binomial") And I obtain a p-value of 0.003 After that I want to know the effect of Var2 and Var3 and I use the following command: mylogit <- glm(Var0 ~Var1+Var2+Var3, data = mydata, family = "binomial") obtaining a p-value of 0.993 My problem is that Var1 and Var2 are dependent and for that reason I obtain such p value.
Is there any method to indicate that Var1 and Var2 are dependent or I have to remove Var2 ? | {
"source": [
"https://stats.stackexchange.com/questions/138332",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/69318/"
]
} |
138,528 | I was reading a blog post by the statistician William Briggs, and the following claim interested me to say the least. What do you make of it? What is a confidence interval? It is an equation, of course, that will
provide you an interval for your data. It is meant to provide a
measure of the uncertainty of a parameter estimate. Now, strictly
according to frequentist theory—which we can even assume is true—the
only thing you can say about the CI you have in hand is that the true
value of the parameter lies within it or that it does not. This is a
tautology, therefore it is always true. Thus, the CI provides no
measure of uncertainty at all: in fact, it is a useless exercise to
compute one. Link: http://wmbriggs.com/post/3169/ | He's referring, rather clumsily, to the well known fact that frequentist analysis doesn't model the state of our knowledge about an unknown parameter with a probability distribution, so having calculated a (say 95%) confidence interval (say 1.2 to 3.4) for a population parameter (say the mean of a Gaussian distribution) from some data you can't then go ahead & claim that there's a 95% probability of the mean falling between 1.2 and 3.4. The probability's one or zero—you don't know which. But what you can say, in general, is that your procedure for calculating 95% confidence intervals is one that ensures they contain the true parameter value 95% of the time. This seems reason enough for saying that CIs reflect uncertainty. As Sir David Cox put it † We define procedures for assessing evidence that are calibrated by how
they would perform were they used repeatedly. In that sense they do
not differ from other measuring instruments. See here & here for further explanation. Other things you can say vary according to the particular method you used to calculate the confidence interval; if you ensure the values inside have greater likelihood, given the data, than the points outside, then you can say that (& it's often approximately true for commonly used methods). See here for more. † Cox (2006), Principles of Statistical Inference , §1.5.2 | {
"source": [
"https://stats.stackexchange.com/questions/138528",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/69430/"
]
} |
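A quick Python simulation of the calibration point made in the answer above: the procedure for building 95% t-intervals covers the true mean in roughly 95% of repeated samples, even though any single interval either covers it or does not. All the numbers below (true mean, sample size, repetitions) are illustrative.
import numpy as np
from scipy import stats
rng = np.random.default_rng(0)
true_mu, n, n_rep = 10.0, 25, 10_000
covered = 0
for _ in range(n_rep):
    x = rng.normal(true_mu, 2.0, size=n)
    m, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
    tcrit = stats.t.ppf(0.975, df=n - 1)
    lo, hi = m - tcrit * se, m + tcrit * se
    covered += (lo <= true_mu <= hi)
print(covered / n_rep)   # close to 0.95: the property belongs to the procedure, not to one interval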
138,569 | I understand what role lambda plays in an elastic-net regression. And I can understand why one would select lambda.min, the value of lambda that minimizes cross validated error. My question is Where in the statistics literature is it recommended to use lambda.1se, that is the value of lambda that minimizes CV error plus one standard error ? I can't seem to find a formal citation, or even a reason for why this is often a good value. I understand that it's a more restrictive regularization, and will shrink the parameters more towards zero, but I'm not always certain of the conditions under which lambda.1se is a better choice over lambda.min. Can someone help explain? | Friedman, Hastie, and Tibshirani (2010) , citing The Elements of Statistical Learning , write, We often use the “one-standard-error” rule when selecting the best model; this acknowledges the fact that the risk curves are estimated with error, so errs on the side of parsimony. The reason for using one standard error, as opposed to any other amount, seems to be because it's, well... standard. Krstajic, et al (2014) write (bold emphasis mine): Breiman et al. [25] have found in the case of selecting optimal
tree size for classification tree models that the tree size with minimal cross-validation error generates a model which generally overfits. Therefore, in Section 3.4.3 of their book Breiman et al. [25] define the one standard error rule (1 SE rule) for choosing an optimal tree size, and they implement it throughout the book. In order to calculate the standard error for single V-fold cross- validation, accuracy needs to be calculated for each fold, and the standard error is calculated from V accuracies from each fold. Hastie et al. [4] define the 1 SE rule as selecting the most parsimonious model whose error is no more than one standard error above the error of the best model, and they suggest in several places using the 1 SE rule for general cross-validation use. The main point of the 1 SE rule, with which we agree, is to choose the simplest model whose accuracy is comparable with the best model . The suggestion is that the choice of one standard error is entirely heuristic, based on the sense that one standard error typically is not large relative to the range of $\lambda$ values. | {
"source": [
"https://stats.stackexchange.com/questions/138569",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/55549/"
]
} |
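A short Python sketch of the rule itself, independent of glmnet and using toy numbers of my own: take the minimising lambda, add one standard error to its CV error, and pick the largest (most penalizing) lambda whose CV error stays below that threshold.
import numpy as np
# Toy cross-validation results over a lambda grid (purely illustrative numbers)
lambdas = np.array([1.0, 0.5, 0.2, 0.1, 0.05, 0.01])
cv_err = np.array([1.30, 1.10, 0.93, 0.90, 0.91, 0.94])
cv_se = np.array([0.06, 0.05, 0.05, 0.04, 0.04, 0.05])
i_min = int(np.argmin(cv_err))
lambda_min = lambdas[i_min]
# 1-SE rule: most parsimonious model whose error is within one SE of the best
threshold = cv_err[i_min] + cv_se[i_min]
ok = cv_err <= threshold
lambda_1se = lambdas[ok].max()
print(lambda_min, lambda_1se)   # 0.1 and 0.2 with these toy numbers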
138,601 | I need to use bootstrap resampling to test the significant difference between two datasets (data1 & data2). I have already used bootstrap resampling to estimate the confidence interval of the mean for a single dataset. However, I am absolutely lost when trying to use bootstrap resampling for testing the null hypothesis of whether any two datasets are different. To be more specific, the reasons for my confusion come from the following: 1- My two datasets are not numeric, they are a set of specific words (Rainy, Sunny, cloudy). I want to test the difference between data1 and data2 in predicting the weather. So my datasets would be something like: data1 data2 correct-prediction
day1 rainy rainy rainy
day2 cloudy sunny sunny
 day3 cloudy rainy rainy How to convert these two datasets - data1 and data2 - to appropriate numeric data (vectors) for applying bootstrap resampling. 2- what is the mechanism of using bootstrap resampling for testing the null hypothesis of whether any two datasets are different. Should I resample each dataset (after converting it to appropriate numeric values) and compute the means for the bootstraps, then making the comparisons? | {
"source": [
"https://stats.stackexchange.com/questions/138601",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/69466/"
]
} |
139,042 | I am trying to solve the regression task. I found out that 3 models are working nicely for different subsets of data: LassoLARS, SVR and Gradient Tree Boosting. I noticed that when I make predictions using all these 3 models and then make a table of 'true output' and outputs of my 3 models I see that each time at least one of the models is really close to the true output, though 2 others could be relatively far away. When I compute minimal possible error (if I take prediction from 'best' predictor for each test example) I get a error which is much smaller than error of any model alone. So I thought about trying to combine predictions from these 3 diffent models into some kind of ensemble. Question is, how to do this properly? All my 3 models are build and tuned using scikit-learn, does it provide some kind of a method which could be used to pack models into ensemble? The problem here is that I don't want to just average predictions from all three models, I want to do this with weighting, where weighting should be determined based on properties of specific example. Even if scikit-learn not provides such functionality, it would be nice if someone knows how to property address this task - of figuring out the weighting of each model for each example in data. I think that it might be done by a separate regressor built on top of all these 3 models, which will try output optimal weights for each of 3 models, but I am not sure if this is the best way of doing this. | Actually, scikit-learn does provide such a functionality, though it might be a bit tricky to implement. Here is a complete working example of such an average regressor built on top of three models. First of all, let's import all the required packages: from sklearn.base import TransformerMixin
from sklearn.datasets import make_regression
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import LinearRegression, Ridge Then, we need to convert our three regressor models into transformers. This will allow us to merge their predictions into a single feature vector using FeatureUnion : class RidgeTransformer(Ridge, TransformerMixin):
def transform(self, X, *_):
return self.predict(X).reshape(len(X), -1)
class RandomForestTransformer(RandomForestRegressor, TransformerMixin):
def transform(self, X, *_):
return self.predict(X).reshape(len(X), -1)
class KNeighborsTransformer(KNeighborsRegressor, TransformerMixin):
def transform(self, X, *_):
return self.predict(X).reshape(len(X), -1) Now, let's define a builder function for our frankenstein model: def build_model():
ridge_transformer = Pipeline(steps=[
('scaler', StandardScaler()),
('poly_feats', PolynomialFeatures()),
('ridge', RidgeTransformer())
])
pred_union = FeatureUnion(
transformer_list=[
('ridge', ridge_transformer),
('rand_forest', RandomForestTransformer()),
('knn', KNeighborsTransformer())
],
n_jobs=2
)
model = Pipeline(steps=[
('pred_union', pred_union),
('lin_regr', LinearRegression())
])
return model Finally, let's fit the model: print('Build and fit a model...')
model = build_model()
X, y = make_regression(n_features=10)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model.fit(X_train, y_train)
score = model.score(X_test, y_test)
print('Done. Score:', score) Output: Build and fit a model...
Done. Score: 0.9600413867438636 Why bother complicating things in such a way? Well, this approach allows us to optimize model hyperparameters using standard scikit-learn modules such as GridSearchCV or RandomizedSearchCV . Also, now it is possible to easily save and load from disk a pre-trained model. | {
"source": [
"https://stats.stackexchange.com/questions/139042",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/68345/"
]
} |
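Following up on the closing remark about hyper-parameter optimisation: assuming the build_model() function from the answer above is already in scope, a grid search over the stacked pipeline might look like the sketch below. The double-underscore parameter paths follow from the step names used in build_model(); nothing else here is prescribed by scikit-learn.
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
# build_model() is the function defined in the answer above; it must be importable or defined here
model = build_model()
X, y = make_regression(n_features=10, random_state=0)
# Parameter names mirror the nested step names: pipeline step -> union member -> estimator -> parameter
param_grid = {
    'pred_union__ridge__ridge__alpha': [0.1, 1.0, 10.0],
    'pred_union__knn__n_neighbors': [3, 5, 10],
}
search = GridSearchCV(model, param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
print(search.best_score_)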
139,047 | I observe a very weird behaviour in the SVD outcome of random data, which I can reproduce in both Matlab and R. It looks like some numerical issue in the LAPACK library; is it? I draw $n=1000$ samples from the $k=2$ dimensional Gaussian with zero mean and identity covariance: $X\sim \mathcal N (0, \mathbf I)$. I assemble them in a $1000 \times 2$ data matrix $\mathbf X$. (I can optionally center $\mathbf X$ or not, it does not influence the following.) Then I perform singular value decomposition (SVD) to get $\mathbf X=\mathbf{USV}^\top$. Let's take some two particular elements of $\mathbf U$, e.g. $U_{11}$ and $U_{22}$, and ask what is the correlation between them across different draws of $\mathbf X$. I would expect that if the number $N_\mathrm{rep}$ of draws is reasonably big, then all such correlations should be around zero (i.e. population correlations should be zero, and sample correlations will be small). However, I observe some weirdly strong correlations (around $\pm0.2$) between $U_{11}$, $U_{12}$, $U_{21}$, and $U_{22}$, and only between these elements. All other pairs of elements have correlations around zero, as expected. Here is how the correlation matrix for the $20$ "upper" elements of $\mathbf U$ looks like (first $10$ elements of the first column, then the first $10$ elements of the second column): Notice strangely high values in the upper-left corners of each quadrant. It was this @whuber's comment that brought this effect to my attention. @whuber argued that the PC1 and PC2 are not independent and presented this strong correlation as an evidence for that. However, my impression is that he accidentally discovered a numerical bug in LAPACK library. What is going on here? Here is @whuber's R code: stat <- function(x) {u <- svd(x)$u; c(u[1,1], u[2, 2])};
Sigma <- matrix(c(1,0,0,1), 2);
sim <- t(replicate(1e3, stat(MASS::mvrnorm(10, c(0,0), Sigma))));
cor.test(sim[,1], sim[,2]); Here is my Matlab code: clear all
rng(7)
n = 1000; %// Number of variables
k = 2; %// Number of observations
Nrep = 1000; %// Number of iterations (draws)
for rep = 1:Nrep
X = randn(n,k);
%// X = bsxfun(@minus, X, mean(X));
[U,S,V] = svd(X,0);
t(rep,:) = [U(1:10,1)' U(1:10,2)'];
end
figure
imagesc(corr(t), [-.5 .5])
axis square
hold on
plot(xlim, [10.5 10.5], 'k')
plot([10.5 10.5], ylim, 'k') | This is not a bug. As we have explored (extensively) in the comments, there are two things happening. The first is that the columns of $U$ are constrained to meet the SVD requirements: each must have unit length and be orthogonal to all the others. Viewing $U$ as a random variable created from a random matrix $X$ via a particular SVD algorithm, we thereby note that these $k(k+1)/2$ functionally independent constraints create statistical dependencies among the columns of $U$ . These dependencies might be revealed to a greater or lesser extent by studying the correlations among the components of $U$ , but a second phenomenon emerges : the SVD solution is not unique. At a minimum, each column of $U$ can be independently negated, giving at least $2^k$ distinct solutions with $k$ columns. Strong correlations (exceeding $1/2$ ) can be induced by changing the signs of the columns appropriately. (One way to do this is given in my first comment to Amoeba's answer in this thread: I force all the $u_{ii},i=1,\ldots, k$ to have the same sign, making them all negative or all positive with equal probability.) On the other hand, all correlations can be made to vanish by choosing the signs randomly, independently, with equal probabilities. (I give an example below in the "Edit" section.) With care, we can partially discern both these phenomena when reading scatterplot matrices of the components of $U$ . Certain characteristics--such as the appearance of points nearly uniformly distributed within well-defined circular regions--belie a lack of independence. Others, such as scatterplots showing clear nonzero correlations, obviously depend on choices made in the algorithm-- but such choices are possible only because of the lack of independence in the first place. The ultimate test of a decomposition algorithm like SVD (or Cholesky, LR, LU, etc.) is whether it does what it claims. In this circumstance it suffices to check that when SVD returns the triple of matrices $(U, D, V)$ , that $X$ is recovered, up to anticipated floating point error, by the product $UDV^\prime$ ; that the columns of $U$ and of $V$ are orthonormal; and that $D$ is diagonal, its diagonal elements are non-negative, and are arranged in descending order. I have applied such tests to the svd algorithm in R and have never found it to be in error. Although that is no assurance it is perfectly correct, such experience--which I believe is shared by a great many people--suggests that any bug would require some extraordinary kind of input in order to be manifest. What follows is a more detailed analysis of specific points raised in the question. Using R 's svd procedure, first you can check that as $k$ increases, the correlations among the coefficients of $U$ grow weaker, but they are still nonzero. If you simply were to perform a larger simulation, you would find they are significant. (When $k=3$ , 50000 iterations ought to suffice.) Contrary to the assertion in the question, the correlations do not "disappear entirely." Second, a better way to study this phenomenon is to go back to the basic question of independence of the coefficients. Although the correlations tend to be near zero in most cases, the lack of independence is clearly evident. This is made most apparent by studying the full multivariate distribution of the coefficients of $U$ . The nature of the distribution emerges even in small simulations in which the nonzero correlations cannot (yet) be detected. 
For instance, examine a scatterplot matrix of the coefficients. To make this practicable, I set the size of each simulated dataset to $4$ and kept $k=2$ , thereby drawing $1000$ realizations of the $4\times 2$ matrix $U$ , creating a $1000\times 8$ matrix. Here is its full scatterplot matrix, with the variables listed by their positions within $U$ : Scanning down the first column reveals an interesting lack of independence between $u_{11}$ and the other $u_{ij}$ : look at how the upper quadrant of the scatterplot with $u_{21}$ is nearly vacant, for instance; or examine the elliptical upward-sloping cloud describing the $(u_{11}, u_{22})$ relationship and the downward-sloping cloud for the $(u_{21}, u_{12})$ pair. A close look reveals a clear lack of independence among almost all of these coefficients: very few of them look remotely independent, even though most of them exhibit near-zero correlation. (NB: Most of the circular clouds are projections from a hypersphere created by the normalization condition forcing the sum of squares of all components of each column to be unity.) Scatterplot matrices with $k=3$ and $k=4$ exhibit similar patterns: these phenomena are not confined to $k=2$ , nor do they depend on the size of each simulated dataset: they just get more difficult to generate and examine. The explanations for these patterns go to the algorithm used to obtain $U$ in the singular value decomposition, but we know such patterns of non-independence must exist by the very defining properties of $U$ : since each successive column is (geometrically) orthogonal to the preceding ones, these orthogonality conditions impose functional dependencies among the coefficients, which thereby translate to statistical dependencies among the corresponding random variables. Edit In response to comments, it may be worth remarking on the extent to which these dependence phenomena reflect the underlying algorithm (to compute an SVD) and how much they are inherent in the nature of the process. The specific patterns of correlations among coefficients depend a great deal on arbitrary choices made by the SVD algorithm, because the solution is not unique: the columns of $U$ may always independently be multiplied by $-1$ or $1$ . There is no intrinsic way to choose the sign. Thus, when two SVD algorithms make different (arbitrary or perhaps even random) choices of sign, they can result in different patterns of scatterplots of the $(u_{ij}, u_{i^\prime j^\prime})$ values. If you would like to see this, replace the stat function in the code below by stat <- function(x) {
i <- sample.int(dim(x)[1]) # Make a random permutation of the rows of x
u <- svd(x[i, ])$u # Perform SVD
as.vector(u[order(i), ]) # Unpermute the rows of u
} This first randomly re-orders the observations x , performs SVD, then applies the inverse ordering to u to match the original observation sequence. Because the effect is to form mixtures of reflected and rotated versions of the original scatterplots, the scatterplots in the matrix will look much more uniform. All sample correlations will be extremely close to zero (by construction: the underlying correlations are exactly zero). Nevertheless, the lack of independence will still be obvious (in the uniform circular shapes that appear, particularly between $u_{i,j}$ and $u_{i,j^\prime}$ ). The lack of data in some quadrants of some of the original scatterplots (shown in the figure above) arises from how the R SVD algorithm selects signs for the columns. Nothing changes about the conclusions. Because the second column of $U$ is orthogonal to the first, it (considered as a multivariate random variable) is dependent on the first (also considered as a multivariate random variable). You cannot have all the components of one column be independent of all the components of the other; all you can do is to look at the data in ways that obscure the dependencies--but the dependence will persist. Here is updated R code to handle the cases $k\gt 2$ and draw a portion of the scatterplot matrix. k <- 2 # Number of variables
p <- 4 # Number of observations
n <- 1e3 # Number of iterations
stat <- function(x) as.vector(svd(x)$u)
Sigma <- diag(1, k, k); Mu <- rep(0, k)
set.seed(17)
sim <- t(replicate(n, stat(MASS::mvrnorm(p, Mu, Sigma))))
colnames(sim) <- as.vector(outer(1:p, 1:k, function(i,j) paste0(i,",",j)))
pairs(sim[, 1:min(11, p*k)], pch=".") | {
"source": [
"https://stats.stackexchange.com/questions/139047",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/28666/"
]
} |
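The "does it do what it claims" checks described in the answer are easy to script. Here is a NumPy version of the same three checks (the answer used R's svd, but the properties being verified are identical):
import numpy as np
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))
U, s, Vt = np.linalg.svd(X, full_matrices=False)
# 1) Reconstruction up to floating-point error
assert np.allclose(U @ np.diag(s) @ Vt, X)
# 2) Orthonormal columns of U and rows of Vt
assert np.allclose(U.T @ U, np.eye(2))
assert np.allclose(Vt @ Vt.T, np.eye(2))
# 3) Non-negative singular values in descending order
assert np.all(s >= 0) and np.all(np.diff(s) <= 0)
print("all SVD checks passed")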
139,070 | I have posted this question, not sure how to move that question to this stats.stackexchange.com. https://stackoverflow.com/questions/28702634/predictive-accuracy-formula-in-excel-or-r?noredirect=1#comment45695509_28702634 Recently, I have built a model and I have the output similar to this below. The output is in the Excel. I am trying to understand if prediction error can be calculated in the Excel? For example, could I write a formula (RMSE, SSE, MSE) in excel to determine the predictive accuracy for the table below? Basically, can I calculate the prediction error for the predicted sales given actual sales? pred_minus_acutal and (predicted - actual)/actual are just scenarios I tried. That may not be right way to get the prediction error. I am not sure if these ideas are right either. Thanks! customer id predicted_sales actual_sales
1A 100 150
2A 200 100
3A 300 256
1B 100 300
4B 400 390
 6B 500 502 | {
"source": [
"https://stats.stackexchange.com/questions/139070",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/61839/"
]
} |
139,072 | Say I have two standard normal random variables $X_1$ and $X_2$ that are jointly
normal with correlation coefficient $r$. What is the distribution function of $\max(X_1, X_2)$? | According to Nadarajah and Kotz, 2008 , Exact Distribution of the Max/Min of Two Gaussian Random Variables , the PDF of $X = \max(X_1, X_2)$ appears to be $$f(x) = 2 \cdot \phi(x) \cdot \Phi\left( \frac{1 - r}{\sqrt{1 - r^2}} x\right),$$ where $\phi$ is the PDF and $\Phi$ is the CDF of the standard normal distribution. | {
"source": [
"https://stats.stackexchange.com/questions/139072",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8086/"
]
} |
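A quick Monte Carlo sanity check of the quoted density, here with r = 0.5 chosen arbitrarily: simulate correlated standard normals, take the maximum, and compare the empirical mean with the mean implied by the formula.
import numpy as np
from scipy import stats
from scipy.integrate import quad
r = 0.5
rng = np.random.default_rng(2)
cov = [[1.0, r], [r, 1.0]]
x1, x2 = rng.multivariate_normal([0.0, 0.0], cov, size=200_000).T
m = np.maximum(x1, x2)
def pdf_max(x, r=r):
    # Density quoted in the answer above
    return 2.0 * stats.norm.pdf(x) * stats.norm.cdf((1.0 - r) / np.sqrt(1.0 - r**2) * x)
mean_formula, _ = quad(lambda x: x * pdf_max(x), -10, 10)
print(m.mean(), mean_formula)   # both close to sqrt((1 - r) / pi), about 0.399 for r = 0.5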
140,080 | I heard that partial correlations between random variables can be found by inverting the covariance matrix and taking appropriate cells from such resulting precision matrix (this fact is mentioned in http://en.wikipedia.org/wiki/Partial_correlation , but without a proof). Why is this the case? | When a multivariate random variable $(X_1,X_2,\ldots,X_n)$ has a nondegenerate covariance matrix $\mathbb{C} = (\gamma_{ij}) = (\text{Cov}(X_i,X_j))$, the set of all real linear combinations of the $X_i$ forms an $n$-dimensional real vector space with basis $E=(X_1,X_2,\ldots, X_n)$ and a non-degenerate inner product given by $$\langle X_i,X_j \rangle = \gamma_{ij}\ .$$ Its dual basis with respect to this inner product , $E^{*} = (X_1^{*},X_2^{*}, \ldots, X_n^{*})$, is uniquely defined by the relationships $$\langle X_i^{*}, X_j \rangle = \delta_{ij}\ ,$$ the Kronecker delta (equal to $1$ when $i=j$ and $0$ otherwise). The dual basis is of interest here because the partial correlation of $X_i$ and $X_j$ is obtained as the correlation between the part of $X_i$ that is left after projecting it into the space spanned by all the other vectors (let's simply call it its "residual", $X_{i\circ}$) and the comparable part of $X_j$, its residual $X_{j\circ}$. Yet $X_i^{*}$ is a vector that is orthogonal to all vectors besides $X_i$ and has positive inner product with $X_i$ whence $X_{i\circ}$ must be some non-negative multiple of $X_i^{*}$, and likewise for $X_j$. Let us therefore write $$X_{i\circ} = \lambda_i X_i^{*},\ X_{j\circ} = \lambda_j X_j^{*}$$ for positive real numbers $\lambda_i$ and $\lambda_j$. The partial correlation is the normalized dot product of the residuals, which is unchanged by rescaling: $$\rho_{ij\circ} = \frac{\langle X_{i\circ}, X_{j\circ} \rangle}{\sqrt{\langle X_{i\circ}, X_{i\circ} \rangle\langle X_{j\circ}, X_{j\circ} \rangle}} = \frac{\lambda_i\lambda_j\langle X_{i}^{*}, X_{j}^{*} \rangle}{\sqrt{\lambda_i^2\langle X_{i}^{*}, X_{i}^{*} \rangle\lambda_j^2\langle X_{j}^{*}, X_{j}^{*} \rangle}} = \frac{\langle X_{i}^{*}, X_{j}^{*} \rangle}{\sqrt{\langle X_{i}^{*}, X_{i}^{*} \rangle\langle X_{j}^{*}, X_{j}^{*} \rangle}}\ .$$ (In either case the partial correlation will be zero whenever the residuals are orthogonal, whether or not they are nonzero.) We need to find the inner products of dual basis elements. To this end, expand the dual basis elements in terms of the original basis $E$: $$X_i^{*} = \sum_{j=1}^n \beta_{ij} X_j\ .$$ Then by definition $$\delta_{ik} = \langle X_i^{*}, X_k \rangle = \sum_{j=1}^n \beta_{ij}\langle X_j, X_k \rangle = \sum_{j=1}^n \beta_{ij}\gamma_{jk}\ .$$ In matrix notation with $\mathbb{I} = (\delta_{ij})$ the identity matrix and $\mathbb{B} = (\beta_{ij})$ the change-of-basis matrix, this states $$\mathbb{I} = \mathbb{BC}\ .$$ That is, $\mathbb{B} = \mathbb{C}^{-1}$, which is exactly what the Wikipedia article is asserting. The previous formula for the partial correlation gives $$\rho_{ij\cdot} = \frac{\beta_{ij}}{\sqrt{\beta_{ii} \beta_{jj}}} = \frac{\mathbb{C}^{-1}_{ij}}{\sqrt{\mathbb{C}^{-1}_{ii} \mathbb{C}^{-1}_{jj}}}\ .$$ | {
"source": [
"https://stats.stackexchange.com/questions/140080",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/37953/"
]
} |
140,148 | I understand how an artificial neural network (ANN) , can be trained in a supervised manner using backpropogation to improve the fitting by decreasing the error in the predictions. I have heard that an ANN can be used for unsupervised learning but how can this be done without a cost function of some sort to guide the optimization stages? With k-means or the EM algorithm there is a function for which each iteration searches to increase. How can we do clustering with an ANN and what mechanism does it use
to group data points in the same locality? (and what extra capabilities are brought with adding more layers to it?) | Neural networks are widely used in unsupervised learning in order to learn better representations of the input data. For example, given a set of text documents, NN can learn a mapping from document to real-valued vector in such a way that resulting vectors are similar for documents with similar content, i.e. distance preserving. This can be achieved using, for example, auto-encoders - a model that is trained to reconstruct the original vector from a smaller representation (hidden layer activations) with reconstruction error (distance from the ID function) as cost function. This process doesn't give you clusters, but it creates meaningful representations that can be used for clustering. You could, for instance, run a clustering algorithm on the hidden layer's activations. Clustering: There are a number of different NN architectures specifically designed for clustering. The most widely known is probably the self-organizing map. A SOM is a NN that has a set of neurons connected to form a topological grid (usually rectangular). When some pattern is presented to an SOM, the neuron with the closest weight vector is considered a winner and its weights are adapted to the pattern, as well as the weights of its neighbourhood. In this way an SOM naturally finds data clusters. A somewhat related algorithm is growing neural gas (it is not limited to a predefined number of neurons). Another approach is Adaptive Resonance Theory where we have two layers: "comparison field" and "recognition field". The recognition field also determines the best match (neuron) for the vector transferred from the comparison field and also has lateral inhibitory connections. Implementation details and exact equations can readily be found by googling the names of these models, so I won't put them here. | {
"source": [
"https://stats.stackexchange.com/questions/140148",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/1098/"
]
} |
140,536 | I'm surprised this hasn't been asked before, but I cannot find the question on stats.stackexchange. This is the formula to calculate the variance of a normally distributed sample: $$\frac{\sum(X - \bar{X}) ^2}{n-1}$$ This is the formula to calculate the mean squared error of observations in a simple linear regression: $$\frac{\sum(y_i - \hat{y}_i) ^2}{n-2}$$ What's the difference between these two formulas? The only difference I can see is that MSE uses $n-2$. So if that's the only difference, why not refer to them as both the variance, but with different degrees of freedom? | The mean squared error as you have written it for OLS is hiding something: $$\frac{\sum_{i}^{n}(y_i - \hat{y}_i) ^2}{n-2} = \frac{\sum_{i}^{n}\left[y_i - \left(\hat{\beta}_{0} + \hat{\beta}_{x}x_{i}\right)\right] ^2}{n-2}$$ Notice that the numerator sums over a function of both $y$ and $x$ , so you lose a degree of freedom for each variable (or for each estimated parameter explaining one variable as a function of the other if you prefer), hence $n-2$ . In the formula for the sample variance, the numerator is a function of a single variable, so you lose just one degree of freedom in the denominator. However, you are on track in noticing that these are conceptually similar quantities. The sample variance measures the spread of the data around the sample mean (in squared units), while the MSE measures the vertical spread of the data around the sample regression line (in squared vertical units). | {
"source": [
"https://stats.stackexchange.com/questions/140536",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/12492/"
]
} |
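A small simulated illustration of the two denominators at work; the data, coefficients and sample size below are arbitrary choices for the sketch.
import numpy as np
rng = np.random.default_rng(3)
n = 50
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, n)
# Sample variance of y: one estimated quantity (the mean), hence n - 1
s2 = np.sum((y - y.mean()) ** 2) / (n - 1)
# Simple regression: two estimated coefficients, hence n - 2
b1, b0 = np.polyfit(x, y, deg=1)
resid = y - (b0 + b1 * x)
mse = np.sum(resid ** 2) / (n - 2)
print(s2)   # spread of y around its mean
print(mse)  # spread of y around the fitted line (smaller, since x explains part of it)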
140,711 | I am using K-means to cluster my data and was looking for a way to suggest an "optimal" cluster number. Gap statistics seems to be a common way to find a good cluster number. For some reason it returns 1 as optimal cluster number, but when I look at the data it's obvious that there are 2 clusters: This is how I call gap in R: gap <- clusGap(data, FUN=kmeans, K.max=10, B=500)
with(gap, maxSE(Tab[,"gap"], Tab[,"SE.sim"], method="firstSEmax")) The result set: > Number of clusters (method 'firstSEmax', SE.factor=1): 1
logW E.logW gap SE.sim
[1,] 5.185578 5.085414 -0.1001632148 0.1102734
[2,] 4.438812 4.342562 -0.0962498606 0.1141643
[3,] 3.924028 3.884438 -0.0395891064 0.1231152
[4,] 3.564816 3.563931 -0.0008853886 0.1387907
[5,] 3.356504 3.327964 -0.0285393917 0.1486991
[6,] 3.245393 3.119016 -0.1263766015 0.1544081
[7,] 3.015978 2.914607 -0.1013708665 0.1815997
[8,] 2.812211 2.734495 -0.0777154881 0.1741944
[9,] 2.672545 2.561590 -0.1109558011 0.1775476
[10,] 2.656857 2.403220 -0.2536369287 0.1945162 Am I doing something wrong or does someone know a better way to get a good cluster number? | Clustering depends on scale , among other things. For discussions of this issue see ( inter alia ) When should you center and standardize data? and PCA on covariance or correlation? . Here are your data drawn with a 1:1 aspect ratio, revealing how much the scales of the two variables differ: To its right, the plot of the gap stats shows the statistics by number of clusters ($k$) with standard errors drawn with vertical segments and the optimal value of $k$ marked with a vertical dashed blue line. According to the clusGap help, The default method "firstSEmax" looks for the smallest $k$ such that its value $f(k)$ is not more than 1 standard error away from the first local maximum. Other methods behave similarly. This criterion does not cause any of the gap statistics to stand out, resulting in an estimate of $k=1$. Choice of scale depends on the application, but a reasonable default starting point is a measure of dispersion of the data, such as the MAD or standard deviation. This plot repeats the analysis after recentering to zero and rescaling to make a unit standard deviation for each component $a$ and $b$: The $k=2$ K-means solution is indicated by varying symbol type and color in the scatterplot of the data at left. Among the set $k\in\{1,2,3,4,5\}$, $k=2$ is clearly favored in the gap statistics plot at right: it is the first local maximum and the stats for smaller $k$ (that is, $k=1$) are significantly lower. Larger values of $k$ are likely overfit for such a small dataset, and none are significantly better than $k=2$. They are shown here only to illustrate the general method. Here is R code to produce these figures. The data approximately match those shown in the question. library(cluster)
xy <- matrix(c(29,391, 31,402, 31,380, 32.5,391, 32.5,360, 33,382, 33,371,
34,405, 34,400, 34.5,404, 36,343, 36,320, 36,303, 37,344,
38,358, 38,356, 38,351, 39,318, 40,322, 40, 341), ncol=2, byrow=TRUE)
colnames(xy) <- c("a", "b")
title <- "Raw data"
par(mfrow=c(1,2))
for (i in 1:2) {
#
# Estimate optimal cluster count and perform K-means with it.
#
gap <- clusGap(xy, kmeans, K.max=10, B=500)
k <- maxSE(gap$Tab[, "gap"], gap$Tab[, "SE.sim"], method="Tibs2001SEmax")
fit <- kmeans(xy, k)
#
# Plot the results.
#
pch <- ifelse(fit$cluster==1,24,16); col <- ifelse(fit$cluster==1,"Red", "Black")
plot(xy, asp=1, main=title, pch=pch, col=col)
plot(gap, main=paste("Gap stats,", title))
abline(v=k, lty=3, lwd=2, col="Blue")
#
# Prepare for the next step.
#
xy <- apply(xy, 2, scale)
title <- "Standardized data"
} | {
"source": [
"https://stats.stackexchange.com/questions/140711",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/66228/"
]
} |
141,087 | This question has puzzled me for a long time. I understand the use of 'log' in maximizing the likelihood so I am not asking about 'log'. My question is, since maximizing log likelihood is equivalent to minimizing "negative log likelihood" (NLL), why did we invent this NLL? Why don't we use the "positive likelihood" all the time? In what circumstances is NLL favored? I found a little explanation here. https://quantivity.wordpress.com/2011/05/23/why-minimize-negative-log-likelihood/ , and it seems to explain the obvious equivalence in depth, but does not solve my confusion. Any explanation will be appreciated. | This is an alternative answer: optimizers in statistical packages usually work by minimizing the result of a function. If your function gives the likelihood value first it's more convenient to use logarithm in order to decrease the value returned by likelihood function. Then, since the log likelihood and likelihood function have the same increasing or decreasing trend, you can minimize the negative log likelihood in order to actually perform the maximum likelihood estimate of the function you are testing. See for example the nlminb function in R here | {
"source": [
"https://stats.stackexchange.com/questions/141087",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/70786/"
]
} |
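A concrete sketch of why the sign flip is convenient in practice: scipy.optimize.minimize only minimises, so we hand it the negative log-likelihood of a normal model and recover the usual maximum likelihood estimates (the simulated data and starting values are illustrative).
import numpy as np
from scipy import stats
from scipy.optimize import minimize
rng = np.random.default_rng(4)
x = rng.normal(loc=5.0, scale=2.0, size=500)
def nll(params):
    mu, log_sigma = params   # log-parametrize sigma so it stays positive
    return -np.sum(stats.norm.logpdf(x, loc=mu, scale=np.exp(log_sigma)))
res = minimize(nll, x0=np.array([0.0, 0.0]))
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)   # close to the sample mean and (ML) standard deviation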
141,104 | Standard error of the mean (SEM) represents the accuracy of the mean. Here's my question/doubt.
Does a higher SEM mean higher accuracy of the mean?
To be more precise, what indicates more accuracy of the mean? sem = 3.5 or sem = 1.5? This may seem odd but it keeps on pestering me for a very long time. Thank you in advance. | {
"source": [
"https://stats.stackexchange.com/questions/141104",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/70693/"
]
} |
141,427 | If we have 2 normal, uncorrelated random variables $X_1, X_2$ then we can create 2 correlated random variables with the formula $Y=\rho X_1+ \sqrt{1-\rho^2} X_2$ and then $Y$ will have a correlation $\rho$ with $X_1$. Can someone explain where this formula comes from? | Suppose you want to find a linear combination of $X_1$ and $X_2$ such that $$
\text{corr}(\alpha X_1 + \beta X_2, X_1) = \rho
$$ Notice that if you multiply both $\alpha$ and $\beta$ by the same (non-zero) constant, the correlation will not change. Thus, we're going to add a condition to preserve variance: $\text{var}(\alpha X_1 + \beta X_2) = \text{var}(X_1)$ This is equivalent to $$
\rho
= \frac{\text{cov}(\alpha X_1 + \beta X_2, X_1)}{\sqrt{\text{var}(\alpha X_1 + \beta X_2) \text{var}(X_1)}}
= \frac{\alpha \overbrace{\text{cov}(X_1, X_1)}^{=\text{var}(X_1)} + \overbrace{\beta \text{cov}(X_2, X_1)}^{=0}}{\sqrt{\text{var}(\alpha X_1 + \beta X_2) \text{var}(X_1)}} = \alpha \sqrt{\frac{\text{var}(X_1)}{\alpha^2 \text{var}(X_1) + \beta^2 \text{var}(X_2)}}
$$ Assuming both random variables have the same variance (this is a crucial assumption!) ($\text{var}(X_1) = \text{var}(X_2)$), we get $$
\rho \sqrt{\alpha^2 + \beta^2} = \alpha
$$ There are many solutions to this equation, so it's time to recall variance-preserving condition: $$
\text{var}(X_1)
= \text{var}(\alpha X_1 + \beta X_2)
= \alpha^2 \text{var}(X_1) + \beta^2 \text{var}(X_2)
\Rightarrow \alpha^2 + \beta^2 = 1
$$ And this leads us to $$
\alpha = \rho \\
\beta = \pm \sqrt{1-\rho^2}
$$ UPD . Regarding the second question: yes, this is known as whitening . | {
"source": [
"https://stats.stackexchange.com/questions/141427",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/71000/"
]
} |
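A quick numerical check of the construction derived above; the value rho = 0.7 and the sample size are arbitrary choices for illustration:
# Y = rho*X1 + sqrt(1 - rho^2)*X2 has correlation ~rho with X1 and the same
# variance as X1, provided X1 and X2 are uncorrelated with equal variances
set.seed(42)
n   <- 1e5
rho <- 0.7
X1  <- rnorm(n)
X2  <- rnorm(n)
Y   <- rho * X1 + sqrt(1 - rho^2) * X2
cor(Y, X1)   # approximately 0.7
var(Y)       # approximately 1, i.e. the variance of X1 is preserved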
141,619 | In my understanding, highly correlated variables won't cause multi-collinearity issues in random forest model (Please correct me if I'm wrong). However, on the other way, if I have too many variables containing similar information, will the model weight too much on this set rather than the others? For example, there are two sets of information (A,B) with the same predictive power. Variable $X_1$,$X_2$,...$X_{1000}$ all contain information A, and only Y contains information B. When random sampling variables, will most of the trees grow on information A, and as a result information B is not fully captured? | That is correct, but therefore in most of those sub-samplings where variable Y was available it would produce the best possible split. You may try to increase mtry, to make sure this happens more often. You may try either recursive correlation pruning, that is in turns to remove one of two variables whom together have the highest correlation. A sensible threshold to stop this pruning could be that any pair of correlations(pearson) is lower than $R^2<.7$ You may try recursive variable importance pruning, that is in turns to remove, e.g. 20% with lowest variable importance. Try e.g. rfcv from randomForest package. You may try some decomposition/aggregation of your redundant variables. | {
"source": [
"https://stats.stackexchange.com/questions/141619",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/42825/"
]
} |
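To make the pruning suggestions above concrete, here is one possible R sketch; mtcars only stands in for real data, the 0.85 cutoff is arbitrary, and caret::findCorrelation is just one convenient implementation of the correlation pruning:
library(caret)
library(randomForest)
X <- mtcars[, -1]                                 # predictors
y <- mtcars$mpg                                   # response
# Correlation pruning: flag one variable out of each highly correlated pair
drop <- findCorrelation(cor(X), cutoff = 0.85)
X_pruned <- if (length(drop)) X[, -drop] else X
# Recursive variable-importance pruning, assessed by cross-validation
set.seed(1)
cv <- rfcv(trainx = X_pruned, trainy = y, cv.fold = 5, step = 0.8)
cv$error.cv                                       # CV error vs. number of retained predictors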
141,864 | Suppose I am running a regression $Y \sim X$. Why by selecting top $k$ principle components of $X$, does the model retain its predictive power on $Y$? I understand that from dimensionality-reduction/feature-selection point of view, if $v_1, v_2, ... v_k$ are the eigenvectors of covariance matrix of $X$ with top $k$ eigenvalues, then $Xv_1, Xv_2 ... Xv_k$ are top $k$ principal components with maximum variances. We can thereby reduce the number of features to $k$ and retain most of the predictive power, as I understand it. But why do top $k$ components retain the predictive power on $Y$? If we talk about a general OLS $Y \sim Z$, there is no reason to suggest that if feature $Z_i$ has maximum variance, then $Z_i$ has the most predictive power on $Y$. Update after seeing comments: I guess I have seen tons of examples of using PCA for dimensionality reduction. I have been assuming that means the dimensions we are left with have the most predictive power. Otherwise what's the point of dimensionality reduction? | Indeed, there is no guarantee that top principal components (PCs) have more predictive power than the low-variance ones. Real-world examples can be found where this is not the case, and it is easy to construct an artificial example where e.g. only the smallest PC has any relation to $y$ at all. This topic was discussed a lot on our forum, and in the (unfortunate) absence of one clearly canonical thread, I can only give several links that together provide various real life as well as artificial examples: Low variance components in PCA, are they really just noise? Is there any way to test for it? Examples of PCA where PCs with low variance are "useful" How can a later principal component be significant predictor in a regression, when an earlier PC is not? How to use principal components analysis to select variables for regression? And the same topic, but in the context of classification: What can cause PCA to worsen results of a classifier? The first principal component does not separate classes, but other PCs do; how is that possible? However, in practice, top PCs often do often have more predictive power than the low-variance ones, and moreover, using only top PCs can yield better predictive power than using all PCs. In situations with a lot of predictors $p$ and relatively few data points $n$ (e.g. when $p \approx n$ or even $p>n$), ordinary regression will overfit and needs to be regularized. Principal component regression (PCR) can be seen as one way to regularize the regression and will tend to give superior results. Moreover, it is closely related to ridge regression, which is a standard way of shrinkage regularization. Whereas using ridge regression is usually a better idea, PCR will often behave reasonably well. See Why does shrinkage work? for the general discussion about bias-variance tradeoff and about how shrinkage can be beneficial. In a way, one can say that both ridge regression and PCR assume that most information about $y$ is contained in the large PCs of $X$, and this assumption is often warranted. See the later answer by @cbeleites (+1) for some discussion about why this assumption is often warranted (and also this newer thread: Is dimensionality reduction almost always useful for classification? for some further comments). Hastie et al. in The Elements of Statistical Learning (section 3.4.1) comment on this in the context of ridge regression: [T]he small singular values [...] 
correspond to directions in the column space of $\mathbf X$ having small variance, and ridge regression shrinks these directions the most. [...] Ridge regression protects against the potentially high variance
of gradients estimated in the short directions. The implicit assumption is
that the response will tend to vary most in the directions of high variance
of the inputs. This is often a reasonable assumption, since predictors are
often chosen for study because they vary with the response variable, but
need not hold in general. See my answers in the following threads for details: What is the advantage of reducing dimensionality of predictors for the purposes of regression? Relationship between ridge regression and PCA regression Does it make sense to combine PCA and LDA? Bottom line For high-dimensional problems, pre-processing with PCA (meaning reducing dimensionality and keeping only top PCs) can be seen as one way of regularization and will often improve the results of any subsequent analysis, be it a regression or a classification method. But there is no guarantee that this will work, and there are often better regularization approaches. | {
"source": [
"https://stats.stackexchange.com/questions/141864",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/8086/"
]
} |
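The caveat in the answer above (top PCs need not predict $y$) is easy to reproduce; in this toy example every number is invented so that only the smallest-variance direction of $X$ carries information about $y$:
set.seed(123)
n <- 500
Z <- cbind(rnorm(n, sd = 10), rnorm(n, sd = 5), rnorm(n, sd = 0.5))
Q <- qr.Q(qr(matrix(rnorm(9), 3)))        # random orthogonal rotation
X <- Z %*% t(Q)
y <- Z[, 3] + rnorm(n, sd = 0.1)          # y depends only on the low-variance direction
pc <- prcomp(X)
round(cor(y, pc$x), 2)                    # only the last PC correlates with y (up to sign)
summary(lm(y ~ pc$x[, 1:2]))$r.squared    # top two PCs explain essentially nothing
summary(lm(y ~ pc$x[, 3]))$r.squared      # the smallest PC explains almost everything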
142,215 | I've seen the other thread here but I don't think the answer satisfied the actual question. What I have continually read is that Naive Bayes is a linear classifier (ex: here ) (such that it draws a linear decision boundary) using the log odds demonstration. However, I simulated two Gaussian clouds and fitted a decision boundary and got the results as such (library e1071 in r, using naiveBayes()) As we can see, the decision boundary is non-linear. Is it trying to say that the parameters (conditional probabilities) are a linear combination in the log space rather than saying the classifier itself separates data linearly? | In general the naive Bayes classifier is not linear, but if the likelihood factors $p(x_i \mid c)$ are from exponential families , the naive Bayes classifier corresponds to a linear classifier in a particular feature space. Here is how to see this. You can write any naive Bayes classifier as* $$p(c = 1 \mid \mathbf{x}) = \sigma\left( \sum_i \log \frac{p(x_i \mid c = 1)}{p(x_i \mid c = 0)} + \log \frac{p(c = 1)}{p(c = 0)} \right),$$ where $\sigma$ is the logistic function . If $p(x_i \mid c)$ is from an exponential family, we can write it as $$p(x_i \mid c) = h_i(x_i)\exp\left(\mathbf{u}_{ic}^\top \phi_i(x_i) - A_i(\mathbf{u}_{ic})\right),$$ and hence $$p(c = 1 \mid \mathbf{x}) = \sigma\left( \sum_i \mathbf{w}_i^\top \phi_i(x_i) + b \right),$$ where \begin{align}
\mathbf{w}_i &= \mathbf{u}_{i1} - \mathbf{u}_{i0}, \\
b &= \log \frac{p(c = 1)}{p(c = 0)} - \sum_i \left( A_i(\mathbf{u}_{i1}) - A_i(\mathbf{u}_{i0}) \right).
\end{align} Note that this is similar to logistic regression – a linear classifier – in the feature space defined by the $\phi_i$. For more than two classes, we analogously get multinomial logistic (or softmax) regression . If $p(x_i \mid c)$ is Gaussian, then $\phi_i(x_i) = (x_i, x_i^2)$ and we should have
\begin{align}
w_{i1} &= \sigma_1^{-2}\mu_1 - \sigma_0^{-2}\mu_0, \\
w_{i2} &= \tfrac{1}{2}\sigma_0^{-2} - \tfrac{1}{2}\sigma_1^{-2}, \\
b_i &= \frac{\mu_0^2}{2\sigma_0^2} - \frac{\mu_1^2}{2\sigma_1^2} + \log \sigma_0 - \log \sigma_1,
\end{align} assuming $p(c = 1) = p(c = 0) = \frac{1}{2}$. *Here is how to derive this result: \begin{align}
p(c = 1 \mid \mathbf{x})
&= \frac{p(\mathbf{x} \mid c = 1) p(c = 1)}{p(\mathbf{x} \mid c = 1) p(c = 1) + p(\mathbf{x} \mid c = 0) p(c = 0)} \\
&= \frac{1}{1 + \frac{p(\mathbf{x} \mid c = 0) p(c = 0)}{p(\mathbf{x} \mid c = 1) p(c = 1)}} \\
&= \frac{1}{1 + \exp\left( -\log\frac{p(\mathbf{x} \mid c = 1) p(c = 1)}{p(\mathbf{x} \mid c = 0) p(c = 0)} \right)} \\
&= \sigma\left( \sum_i \log \frac{p(x_i \mid c = 1)}{p(x_i \mid c = 0)} + \log \frac{p(c = 1)}{p(c = 0)} \right)
\end{align} | {
"source": [
"https://stats.stackexchange.com/questions/142215",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/48858/"
]
} |
142,348 | I apologize in advance for the fact that I'm still coming up to speed on this. I'm trying to understand the pros and cons of using tanh (map -1 to 1) vs. sigmoid (map 0 to 1) for my neuron activation function. From my reading it sounded like a minor thing with marginal differences. In practice for my problems I find that the sigmoid is easier to train and strangely, the sigmoid appears to find general solution better. By this I mean that when the sigmoid version is done training it does well on the reference (untrained) data set, where the tanh version seems to be able to get the correct answers on training data while doing poorly on the reference. This is for the same network architecture. One intuition I have is that with the sigmoid, it's easier for a neuron to almost fully turn off, thus providing no input to subsequent layers. The tanh has a harder time here since it needs to perfectly cancel its inputs, else it always gives a value to the next layer. Maybe this intuition is wrong though. Long post. Bottom line, what's the trade, and should it make a big difference? | In Symon Haykin's "Neural Networks: A Comprehensive Foundation" book there is the following explanation from which I quote: For the learning time to be minimized, the use of non-zero mean inputs should be avoided. Now, insofar as the signal vector $\bf x$ applied to a neuron in the first hidden layer of a multilayer perceptron is concerned, it is easy to remove the mean from each element of $\bf x$ before its application to the network. But what about the signals applied to the neurons in the remaining hidden and output layers of the network? The answer to this question lies in the type of activation function used in the network. If the activation function is non-symmetric, as in the case of the sigmoid function, the output of each neuron is restricted to the interval $[0,1]$. Such a choice introduces a source of systematic bias for those neurons located beyond the first layer of the network. To overcome this problem we need to use an antisymmetric activation function such as the hyperbolic tangent function. With this latter choice, the output of each neuron is permitted to assume both positive and negative values in the interval $[-1,1]$, in which case it is likely for its mean to be zero. If the network connectivity is large, back-propagation learning with antisymmetric activation functions can yield faster convergence than a similar process with non-symmetric activation functions, for which there is also empirical evidence (LeCun et al. 1991). The cited reference is: Y. LeCun, I. Kanter, and S.A.Solla: "Second-order properties of error surfaces: learning time and generalization", Advances in Neural Information Processing Systems, vol. 3, pp. 918-924, 1991. Another interesting reference is the following: Y. LeCun, L. Bottou, G. Orr and K. Muller: " Efficient BackProp ", in Orr, G. and Muller K. (Eds), Neural Networks: Tricks of the trade, Springer, 1998 | {
"source": [
"https://stats.stackexchange.com/questions/142348",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/71468/"
]
} |
142,533 | I'm into epidemiology. I'm not a statistician but I try to perform the analyses myself, although I often encounter difficulties. I did my first analysis some 2 years ago. P values were included everywhere in my analyses (I simply did what other researchers were doing) from descriptive tables to regression analyses. Little by little, statisticians working in my department persuaded me to skip all (!) the p values, except where I truly have a hypothesis. The problem is that p values are abundant in medical research publications.
It is conventional to include p values on far too many lines; descriptive data of means, medians or whatever usually go along with p values (students t-test, Chi-square etc). I've recently submitted a paper to a journal, and I refused (politely) to add p values to my "baseline" descriptive table. The paper was ultimately rejected. To exemplify, see the figure below; it is the descriptive table from the latest published article in a respected journal of internal medicine.: Statisticians are mostly (if not always) involved in the reviewing of these manuscripts. So a laymen like myself expects to not find any p values where there are no hypothesis. But they are abundant, but the reason for this remain elusive to me. I find it hard to believe that it is ignorance. I realize that this is a borderline statistical question. But I'm looking for the rationale behind this phenomenon. | Clearly I don't need to tell you what a p-value is, or why over-reliance on them is a problem; you apparently understand those things quite well enough already. With publishing, you have two competing pressures. The first - and one you should push for at every reasonable opportunity - is to do what makes sense. The second, ultimately, is the need to actually publish. There's little gain if nobody sees your fine efforts at reforming terrible practice. So instead of avoiding it altogether: do it as little of such pointless activity as you can get away with that still gets it published maybe include a mention of this recent Nature methods article [1] if you think it will help, or perhaps better one or more of the other references. It at least should help establish that there's some opposition to the primacy of p-values. consider other journals, if another would be suitable Is this the same in other disciplines? The problem of over-use of p-values occurs in a number of disciplines (this can even be a problem when there is some hypothesis), but is much less common in some than others. Some disciplines do have issues with p-value-itis, and the problems that causes can eventually lead to somewhat overblown reactions [2] (and to a smaller extent, [1], and at least in some places, a few of the others as well). I think there are a variety of reasons for it, but the over-reliance of p-values seems to acquire a momentum of its own - there's something about saying "significant" and rejecting a null that people seem to find very attractive; various disciplines (e.g. see [3][4][5][6][7][8][9][10][11]) have (with varying degrees of success) been fighting against the problem of over reliance on p-values (especially $\alpha$=0.05) for many years, and have made many different kinds of suggestions - not all of which I agree with, but I include a variety of views to give some sense of the different things people have had to say. Some of them advocate focusing on confidence intervals, some advocate looking at effect sizes, some advocate Bayesian methods, some smaller p-values, some just on avoiding using p-values in particular ways, and so on. There are many different views on what to do instead, but between them there's a lot of material on problems with relying on p-values, at least the way it's pretty commonly done. See those references for many further references in turn. This is just a sampling - many dozens more references can be found. A few authors give reasons why they think p-values are prevalent. Some of these references may be useful if you do want to argue the point with an editor. 
[1] Halsey L.G., Curran-Everett D., Vowler S.L. & Drummond G.B. (2015), "The fickle P value generates irreproducible results," Nature Methods 12 , 179–185 doi:10.1038/nmeth.3288 http://www.nature.com/nmeth/journal/v12/n3/abs/nmeth.3288.html [2] David Trafimow, D. and Marks, M. (2015), Editorial, Basic and Applied Social Psychology , 37 :1–2 http://www.tandfonline.com/loi/hbas20 DOI: 10.1080/01973533.2015.1012991 [3] Cohen, J. (1990), Things I have learned (so far), American Psychologist , 45 (12), 1304–1312. [4] Cohen, J. (1994), The earth is round (p < .05), American Psychologist , 49 (12), 997–1003. [5] Valen E. Johnson (2013), Revised standards for statistical evidence PNAS , vol. 110, no. 48, 19313–19317 http://www.pnas.org/content/110/48/19313.full.pdf [6] Kruschke J.K. (2010), What to believe: Bayesian methods for data analysis, Trends in cognitive sciences 14 (7), 293-300 [7] Ioannidis, J. (2005) Why Most Published Research Findings Are False, PLoS Med. Aug; 2(8): e124. doi: 10.1371/journal.pmed.0020124 [8] Gelman, A. (2013),
P Values and Statistical Practice, Epidemiology Vol. 24 , No. 1, January, 69-72 [9] Gelman, A. (2013), "The problem with p-values is how they're used", (Discussion of “In defense of P-values,” by Paul Murtaugh, for Ecology )
unpublished http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.300.9053 http://www.stat.columbia.edu/~gelman/research/unpublished/murtaugh2.pdf [10] Nuzzo R. (2014), Statistical errors: P values, the 'gold standard' of statistical validity, are not as reliable as many scientists assume, News and Comment, Nature , Vol. 506 (13), 150-152 [11] Wagenmakers E, (2007) A practical solution to the pervasive problems of p values, Psychonomic Bulletin & Review 14 (5), 779-804 | {
"source": [
"https://stats.stackexchange.com/questions/142533",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/35413/"
]
} |
143,905 | In principal component analysis (PCA), we get eigenvectors (unit vectors) and eigenvalues. Now, let us define loadings as $$\text{Loadings} = \text{Eigenvectors} \cdot \sqrt{\text{Eigenvalues}}.$$ I know that eigenvectors are just directions and loadings (as defined above) also include variance along these directions. But for my better understanding, I would like to know where I should use loadings instead of eigenvectors? An example would be perfect! I have generally only seen people using eigenvectors but every once in a while they use loadings (as defined above) and then I am left feeling that I do not really understand the difference. | In PCA, you split covariance (or correlation) matrix into scale part (eigenvalues) and direction part (eigenvectors). You may then endow eigenvectors with the scale: loadings . So, loadings are thus become comparable by magnitude with the covariances/correlations observed between the variables, - because what had been drawn out from the variables' covariation now returns back - in the form of the covariation between the variables and the principal components. Actually, loadings are the covariances/correlations between the original variables and the unit-scaled components . This answer shows geometrically what loadings are and what are coefficients associating components with variables in PCA or factor analysis. Loadings : Help you interpret principal components or factors; Because they are the linear combination weights (coefficients) whereby unit-scaled components or factors define or "load" a variable . (Eigenvector is just a coefficient of orthogonal transformation or projection, it is devoid of "load" within its value. "Load" is (information of the amount of) variance, magnitude. PCs are extracted to explain variance of the variables. Eigenvalues are the variances of (= explained by) PCs. When we multiply eigenvector by sq.root of the eivenvalue we "load" the bare coefficient by the amount of variance. By that virtue we make the coefficient to be the measure of association , co-variability.) Loadings sometimes are "rotated" (e.g. varimax) afterwards to facilitate
interpretability ( see also ); It is loadings which "restore" the original covariance/correlation matrix (see also this thread discussing nuances of PCA and FA in that respect); While in PCA you can
compute values of components both from eigenvectors and loadings, in
factor analysis you compute factor scores out of loadings . And, above all, loading matrix is informative: its vertical sums of
squares are the eigenvalues, components' variances, and its
horizontal sums of squares are portions of the variables' variances
being "explained" by the components. Rescaled or standardized loading is the loading divided by the variable's st. deviation; it is the correlation. (If your PCA is correlation-based PCA, loading is equal to the rescaled one, because correlation-based PCA is the PCA on standardized variables.) Rescaled loading squared has the meaning of the contribution of a pr. component into a variable; if it is high (close to 1) the variable is well defined by that component alone. An example of computations done in PCA and FA for you to see . Eigenvectors are unit-scaled loadings; and they are the coefficients (the cosines) of orthogonal transformation (rotation) of variables into principal components or back. Therefore it is easy to compute the components' values (not standardized) with them. Besides that their usage is limited. Eigenvector value squared has the meaning of the contribution of a variable into a pr. component; if it is high (close to 1) the component is well defined by that variable alone. Although eigenvectors and loadings are simply two different ways to normalize coordinates of the same points representing columns (variables) of the data on a biplot , it is not a good idea to mix the two terms. This answer explained why. See also . | {
"source": [
"https://stats.stackexchange.com/questions/143905",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/71539/"
]
} |
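A small numerical check of the definitions above, using a correlation-based PCA of the built-in mtcars data (chosen only for convenience):
X    <- scale(mtcars)                        # standardized variables -> correlation-based PCA
e    <- eigen(cor(mtcars))
load <- e$vectors %*% diag(sqrt(e$values))   # loadings = eigenvectors * sqrt(eigenvalues)
scores   <- X %*% e$vectors                  # component scores
scores_u <- scale(scores)                    # unit-scaled components
max(abs(cor(X, scores_u) - load))            # ~0: loadings are the variable-component correlations
colSums(load^2)                              # column sums of squares recover the eigenvalues
rowSums(load^2)                              # row sums: variance of each variable explained (all components kept, so 1)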
143,907 | In case of perfect multicollinearity the predictor matrix is singular and therefore cannot be inverted . Under these circumstances, the ordinary least-squares estimator $\hat\beta=(\Bbb X'\Bbb X)^{-1}\Bbb X'\Bbb y$ does not exist (Wikipedia) . I can't visualize the situation. When does the situation of perfect multicollinearity occur ? In case of perfect multicollinearity, why is the predictor matrix singular ? Under these circumstances, why does the ordinary least-squares estimator $\hat\beta=(\Bbb X'\Bbb X)^{-1}\Bbb X'\Bbb y$ not exist ? | {
"source": [
"https://stats.stackexchange.com/questions/143907",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/26792/"
]
} |
144,041 | In the book Statistical Models and Methods for Lifetime Data , it is written : Censoring: When an observation is incomplete due to some random cause. Truncation: When the incomplete nature of the observation is due to a systematic selection process inherent to the study design. What is meant by "systematic selection process inherent to the study design" in the definition of truncation? What is the difference between censoring and truncation? | Definitions vary, and the two terms are sometimes used interchangeably. I'll try to explain the most common uses using the following data set:
$$ 1\qquad 1.25\qquad 2\qquad 4 \qquad 5$$ Censoring : some observations will be censored, meaning that we only know that they are below (or above) some bound. This can for instance occur if we measure the concentration of a chemical in a water sample. If the concentration is too low, the laboratory equipment cannot detect the presence of the chemical. It may still be present though, so we only know that the concentration is below the laboratory's detection limit. If the detection limit is 1.5, so that observations that fall below this limit are censored, our example data set would become:
$$ <1.5\qquad <1.5\qquad 2\qquad 4 \qquad 5,$$
that is, we don't know the actual values of the first two observations, but only that they are smaller than 1.5. Truncation : the process generating the data is such that it only is possible to observe outcomes above (or below) the truncation limit. This can for instance occur if measurements are taken using a detector which only is activated if the signals it detects are above a certain limit. There may be lots of weak incoming signals, but we can never tell using this detector. If the truncation limit is 1.5, our example data set would become
$$2\qquad 4 \qquad 5$$
and we would not know that there in fact were two signals which were not recorded. | {
"source": [
"https://stats.stackexchange.com/questions/144041",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/26792/"
]
} |
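The same toy data, coded roughly the way they would enter an analysis; the limit of 1.5 follows the answer above and everything else is illustrative:
x     <- c(1, 1.25, 2, 4, 5)
limit <- 1.5
# Censoring: every unit stays in the data set, but values below the limit are
# only known to be below it, so we record the limit plus an indicator
censored <- data.frame(value = pmax(x, limit), observed = x >= limit)
censored
# Truncation: units below the limit never enter the data set at all
truncated <- x[x >= limit]
truncated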
144,121 | I want to perform logistic regression with the following binomial response and with $X_1$ and $X_2$ as my predictors. I can present the same data as Bernoulli responses in the following format. The logistic regression outputs for these 2 data sets are mostly the same. The deviance residuals and AIC are different. (The difference between the null deviance and the residual deviance is the same in both cases - 0.228.) The following are the regression outputs from R. The data sets are called binom.data and bern.data. Here is the binomial output. Call:
glm(formula = cbind(Successes, Trials - Successes) ~ X1 + X2,
family = binomial, data = binom.data)
Deviance Residuals:
[1] 0 0 0
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.9649 21.6072 -0.137 0.891
X1Yes -0.1897 2.5290 -0.075 0.940
X2 0.3596 1.9094 0.188 0.851
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 2.2846e-01 on 2 degrees of freedom
Residual deviance: -4.9328e-32 on 0 degrees of freedom
AIC: 11.473
Number of Fisher Scoring iterations: 4
Here is the Bernoulli output.
Call:
glm(formula = Success ~ X1 + X2, family = binomial,
data = bern.data)
Deviance Residuals:
Min 1Q Median 3Q Max
-1.6651 -1.3537 0.7585 0.9281 1.0108
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.9649 21.6072 -0.137 0.891
X1Yes -0.1897 2.5290 -0.075 0.940
X2 0.3596 1.9094 0.188 0.851
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 15.276 on 11 degrees of freedom
Residual deviance: 15.048 on 9 degrees of freedom
AIC: 21.048
Number of Fisher Scoring iterations: 4 My questions: I can see that the point estimates and standard errors between the 2 approaches are equivalent in this particular case. Is this equivalence true in general? How can the answer for Question #1 be justified mathematically? Why are the deviance residuals and AIC different? | 1) Yes. You can aggregate/de-aggregate (?) binomial data from individuals with the same covariates. This comes from the fact that the sufficient statistic for a binomial model is the total number of events for each covariate vector; and the Bernoulli is just a special case of the binomial. Intuitively, each Bernoulli trial that makes up a binomial outcome is independent, so there shouldn't be a difference between counting these as a single outcome or as separate individual trials. 2) Say we have $n$ unique covariate vectors $x_1, x_2, \ldots, x_n$, each of which has a binomial outcome on $N_i$ trials, i.e.
$$Y_i \sim \mathrm{Bin}(N_i, p_i)$$
You've specified a logistic regression model, so
$$\mathrm{logit}(p_i) = \sum_{k=1}^K \beta_k x_{ik}$$
although we'll see later that this isn't important. The log-likelihood for this model is
$$\ell(\beta; Y) = \sum_{i=1}^n \log {N_i \choose Y_i} + Y_i \log(p_i) + (N_i - Y_i) \log(1-p_i)$$
and we maximise this with respect to $\beta$ (in the $p_i$ terms) to get our parameter estimates. Now, consider that for each $i = 1, \ldots, n$, we split the binomial outcome into $N_i$ individual Bernoulli/binary outcomes, as you have done. Specifically, create
$$Z_{i1}, \ldots, Z_{iY_i} = 1$$
$$Z_{i(Y_i+1)}, \ldots, Z_{iN_i} = 0$$
That is, the first $Y_i$ are 1s and the rest are 0s. This is exactly what you did - but you could equally have done the first $(N_i - Y_i)$ as 0s and the rest as 1s, or any other ordering, right? Your second model says that
$$Z_{ij} \sim \mathrm{Bernoulli}(p_i)$$
with the same regression model for $p_i$ as above. The log-likelihood for this model is
$$ \ell(\beta; Z) = \sum_{i=1}^n \sum_{j=1}^{N_i} Z_{ij}\log(p_i) + (1-Z_{ij})\log(1-p_i) $$
and because of the way we defined our $Z_{ij}$s, this can be simplified to
$$ \ell(\beta; Y) = \sum_{i=1}^n Y_i \log(p_i) + (N_i - Y_i)\log(1-p_i) $$
which should look pretty familiar. To get the estimates in the second model, we maximise this with respect to $\beta$. The only difference between this and the first log-likelihood is the term $\log {N_i \choose Y_i}$, which is constant with respect to $\beta$, and so doesn't affect the maximisation and we'll get the same estimates. 3) Each observation has a deviance residual. In the binomial model, they are
$$ D_i = 2\left[Y_i \log \left( \frac{Y_i/N_i}{\hat{p}_i} \right) + (N_i-Y_i) \log \left( \frac{1-Y_i/N_i}{1-\hat{p}_i} \right)\right] $$
where $\hat{p}_i$ is the estimated probability from your model. Note that your binomial model is saturated (0 residual degrees of freedom) and has perfect fit: $\hat{p}_i = Y_i/N_i$ for all observations, so $D_i = 0$ for all $i$. In the Bernoulli model,
$$ D_{ij} = 2\left[Z_{ij} \log \left( \frac{Z_{ij}}{\hat{p}_i} \right) + (1-Z_{ij}) \log \left(\frac{1-Z_{ij}}{1-\hat{p}_i} \right)\right] $$
Apart from the fact that you will now have $\sum_{i=1}^n N_i$ deviance residuals (instead of $n$ as with the binomial data), these will each be either
$$D_{ij} = -2\log(\hat{p}_i)$$
or
$$D_{ij} = -2\log(1-\hat{p}_i)$$
depending on whether $Z_{ij} = 1$ or $0$, and are obviously not the same as the above. Even if you sum these over $j$ to get a sum of deviance residuals for each $i$, you don't get the same:
$$ D_i = \sum_{j=1}^{N_i} D_{ij} = 2\left[Y_i \log \left( \frac{1}{\hat{p}_i} \right) + (N_i-Y_i) \log \left( \frac{1}{1-\hat{p}_i} \right)\right] $$ The fact that the AIC is different (but the change in deviance is not) comes back to the constant term that was the difference between the log-likelihoods of the two models. When calculating the deviance, this is cancelled out because it is the same in all models based on the same data. The AIC is defined as
$$AIC = 2K - 2\ell$$
and that combinatorial term is the difference between the $\ell$s: $$AIC_{\mathrm{Bernoulli}} - AIC_{\mathrm{Binomial}} = 2\sum_{i=1}^n \log {N_i \choose Y_i} = 9.575$$ | {
"source": [
"https://stats.stackexchange.com/questions/144121",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/70932/"
]
} |
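The equivalence (and the AIC difference) derived above can be checked numerically; the question's actual binom.data and bern.data are not reproduced, so the data below are made up for illustration:
binom.data <- data.frame(X1 = c("Yes", "No", "Yes"), X2 = c(1, 2, 3),
                         Trials = c(10, 12, 8), Successes = c(4, 7, 5))
# Expand each binomial row into its individual Bernoulli trials
bern.data <- do.call(rbind, lapply(seq_len(nrow(binom.data)), function(i) {
  with(binom.data[i, ],
       data.frame(X1 = X1, X2 = X2,
                  Success = rep(c(1, 0), c(Successes, Trials - Successes))))
}))
fit.binom <- glm(cbind(Successes, Trials - Successes) ~ X1 + X2,
                 family = binomial, data = binom.data)
fit.bern  <- glm(Success ~ X1 + X2, family = binomial, data = bern.data)
cbind(binomial = coef(fit.binom), bernoulli = coef(fit.bern))   # identical estimates
AIC(fit.bern) - AIC(fit.binom)                                  # equals the combinatorial term below
2 * sum(with(binom.data, lchoose(Trials, Successes)))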
144,158 | I am trying to do time series analysis and am new to this field. I have daily count of an event from 2006-2009 and I want to fit a time series model to it. Here is the progress that I have made: timeSeriesObj = ts(x,start=c(2006,1,1),frequency=365.25)
plot.ts(timeSeriesObj) The resulting plot I get is: In order to verify whether there is seasonality and trend in the data or not, I follow the steps mentioned in this post : ets(x)
fit <- tbats(x)
seasonal <- !is.null(fit$seasonal)
seasonal and in Rob J Hyndman's blog : library(fma)
fit1 <- ets(x)
fit2 <- ets(x,model="ANN")
deviance <- 2*c(logLik(fit1) - logLik(fit2))
df <- attributes(logLik(fit1))$df - attributes(logLik(fit2))$df
#P value
1-pchisq(deviance,df) Both cases indicate that there is no seasonality. When I plot the ACF & PACF of the series, here is what I get: My questions are: Is this the way to handle daily time series data? This page suggests that I should be looking at both weekly and annual patterns but the approach is not clear to me. I do not know how to proceed once I have the ACF and PACF plots. Can I simply use the auto.arima function? fit <- arima(myts, order=c(p, d, q) *****Updated Auto.Arima results****** When i change the frequency of the data to 7 according to Rob Hyndman's comments here , auto.arima selects a seasonal ARIMA model and outputs: Series: timeSeriesObj
ARIMA(1,1,2)(1,0,1)[7]
Coefficients:
ar1 ma1 ma2 sar1 sma1
0.89 -1.7877 0.7892 0.9870 -0.9278
s.e. NaN NaN NaN 0.0061 0.0162
sigma^2 estimated as 21.72: log likelihood=-4319.23
AIC=8650.46 AICc=8650.52 BIC=8682.18 ******Updated Seasonality Check****** When I test seasonality with frequency 7, it outputs True but with seasonality 365.25, it outputs false. Is this enough to conclude a lack of yearly seasonality? timeSeriesObj = ts(x,start=c(2006,1,1),frequency=7)
fit <- tbats(timeSeriesObj)
seasonal <- !is.null(fit$seasonal)
seasonal returns: True while timeSeriesObj = ts(x,start=c(2006,1,1),frequency=365.25)
fit <- tbats(timeSeriesObj)
seasonal <- !is.null(fit$seasonal)
seasonal returns: False | Your ACF and PACF indicate that you at least have weekly seasonality, which is shown by the peaks at lags 7, 14, 21 and so forth. You may also have yearly seasonality, although it's not obvious from your time series. Your best bet, given potentially multiple seasonalities, may be a tbats model, which explicitly models multiple types of seasonality. Load the forecast package: library(forecast) Your output from str(x) indicates that x does not yet carry information about potentially having multiple seasonalities. Look at ?tbats , and compare the output of str(taylor) . Assign the seasonalities: x.msts <- msts(x,seasonal.periods=c(7,365.25)) Now you can fit a tbats model. (Be patient, this may take a while.) model <- tbats(x.msts) Finally, you can forecast and plot: plot(forecast(model,h=100)) You should not use arima() or auto.arima() , since these can only handle a single type of seasonality: either weekly or yearly. Don't ask me what auto.arima() would do on your data. It may pick one of the seasonalities, or it may disregard them altogether. EDIT to answer additional questions from a comment: How can I check whether the data has a yearly seasonality or not? Can I create another series of total number of events per month and
use its ACF to decide this? Calculating a model on monthly data might be a possibility. Then you could, e.g., compare AICs between models with and without seasonality. However, I'd rather use a holdout sample to assess forecasting models. Hold out the last 100 data points. Fit a model with yearly and weekly seasonality to the rest of the data (like above), then fit one with only weekly seasonality, e.g., using auto.arima() on a ts with frequency=7 . Forecast using both models into the holdout period. Check which one has a lower error, using MAE, MSE or whatever is most relevant to your loss function. If there is little difference between errors, go with the simpler model; otherwise, use the one with the lower error. The proof of the pudding is in the eating, and the proof of the time series model is in the forecasting. To improve matters, don't use a single holdout sample (which may be misleading, given the uptick at the end of your series), but use rolling origin forecasts, which is also known as "time series cross-validation" . (I very much recommend that entire free online forecasting textbook . So Seasonal ARIMA models cannot usually handle multiple seasonalities? Is it a property of the model itself or is it just the
way the functions in R are written? Standard ARIMA models handle seasonality by seasonal differencing. For seasonal monthly data, you would not model the raw time series, but the time series of differences between March 2015 and March 2014, between February 2015 and February 2014 and so forth. (To get forecasts on the original scale, you'd of course need to undifference again.) There is no immediately obvious way to extend this idea to multiple seasonalities. Of course, you can do something using ARIMAX, e.g., by including monthly dummies to model the yearly seasonality, then model residuals using weekly seasonal ARIMA. If you want to do this in R, use ts(x,frequency=7) , create a matrix of monthly dummies and feed that into the xreg parameter of auto.arima() . I don't recall any publication that specifically extends ARIMA to multiple seasonalities, although I'm sure somebody has done something along the lines in my previous paragraph. | {
"source": [
"https://stats.stackexchange.com/questions/144158",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/55780/"
]
} |
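A rough sketch of the "weekly ARIMA plus monthly dummies in xreg" idea mentioned at the end of the answer above; x is assumed to be the daily count vector from the question, and the start date and horizon are illustrative:
library(forecast)
make_dummies <- function(d) {                 # 11 monthly dummy columns
  m <- factor(format(d, "%m"), levels = sprintf("%02d", 1:12))
  model.matrix(~ m)[, -1, drop = FALSE]
}
dates <- seq(as.Date("2006-01-01"), by = "day", length.out = length(x))
fit   <- auto.arima(ts(x, frequency = 7), xreg = make_dummies(dates))
h       <- 30                                 # days ahead
f.dates <- seq(max(dates) + 1, by = "day", length.out = h)
fc      <- forecast(fit, xreg = make_dummies(f.dates))
plot(fc)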
144,634 | I'm know that Gaussian Process Regression (GPR) is an alternative to using splines for fitting flexible nonlinear models. I would like to know in which situations would one be more suitable than the other, especially in the Bayesian regression framework. I've already looked at What are the advantages / disadvantages of using splines, smoothed splines, and gaussian process emulators? but there does not seem to be anything on GPR in this post. | I agree with @j__'s answer. However, I would like to highlight the fact that splines are just a special case of Gaussian Process regression/kriging . If you take a certain type of kernel in Gaussian process regression, you exactly obtain the spline fitting model. This fact is proven in this paper by Kimeldorf and Wahba (1970) . It is rather technical, as it uses the link between the kernels used in kriging and Reproducing Kernel Hilbert Spaces (RKHS). | {
"source": [
"https://stats.stackexchange.com/questions/144634",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/29770/"
]
} |
144,635 | A salesperson has a probability of 70% to make a sale. | {
"source": [
"https://stats.stackexchange.com/questions/144635",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/69807/"
]
} |
144,900 | In two popular language identification libraries, Compact Language Detector 2 for C++ and language detector for java, both of them used (character based) n-grams to extract text features. Why is a bag-of-words (single word/dictionary) not used, and what is the advantage and disadvantage of bag-of-words and n-grams? Also, what are some other uses of n-grams model in text classification? Oh oops. Seems like there is a similar question here: Regarding using bigram (N-gram) model to build feature vector for text document But can someone give a more comprehensive answer? Which is better in the case of language identification? (Hopefully I got the meaning of n-grams and bag-of-words correct, haha, if not please help me with that.) | I think the most detailed answers can be found in Mehryar Mohri's extensive work on the topic. Here's a link to one of his lecture slides on the topic: https://web.archive.org/web/20151125061427/http://www.cims.nyu.edu/~mohri/amls/lecture_3.pdf The problem of language detection is that human language (words) have structure. For example, in English, it's very common for the letter 'u' to follow the letter 'q,' while this is not the case in transliterated Arabic. n-grams work by capturing this structure. Thus, certain combinations of letters are more likely in some languages than others. This is the basis of n-gram classification. Bag-of-words, on the other hand, depends on searching through a large dictionary and essentially doing template matching. There are two main drawbacks here: 1) each language would have to have an extensive dictionary of words on file, which would take a relatively long time to search through, and 2) bag-of-words will fail if none of the words in the training set are included in the testing set. Assuming that you are using bigrams (n=2) and there are 26 letters in your alphabet, then there are only 26^2 = 676 possible bigrams for that alphabet, many of which will never occur. Therefore, the "profile" (to use language detector's words) for each language needs a very small database. A bag-of-words classifier, on-the-other-hand would need a full dictionary for EACH language in order to guarantee that a language could be detected based on whichever sentence it was given. So in short - each language profile can be quickly generated with a relatively small feature space. Interestingly, n-grams only work because letters are not drawn iid in a language - this is explicitly leverage. Note: the general equation for the number of n-grams for words is l^n where l is the number of letters in the alphabet. | {
"source": [
"https://stats.stackexchange.com/questions/144900",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/72787/"
]
} |
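A tiny illustration of what a character-bigram profile looks like; the sentence is a toy example, and a real language identifier would build such counts from large corpora for each language:
bigrams <- function(s) {
  s <- gsub("[^a-z]", "", tolower(s))         # keep letters only
  substring(s, 1:(nchar(s) - 1), 2:nchar(s))
}
en <- "the quick brown fox jumps over the lazy dog"
table(bigrams(en))                            # sparse profile: a handful of the 26^2 = 676 possible bigrams
length(unique(bigrams(en)))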
145,122 | I've been watching a lot of tutorial videos and they are look the same. This one for example: https://www.youtube.com/watch?v=ip4iSMRW5X4 They explain states, actions and probabilities which are fine. The person explains it ok but I just can't seem to get a grip on what it would be used for in real-life. I haven't come across any lists as of yet. The most common one I see is chess. Can it be used to predict things? If so what types of things? Can it find patterns amoung infinite amounts of data? What can this algorithm do for me. Bonus: It also feels like MDP's is all about getting from one state to another, is this true? | A Markovian Decision Process indeed has to do with going from one state to another and is mainly used for planning and decision making . The theory Just repeating the theory quickly, an MDP is: $$\text{MDP} = \langle S,A,T,R,\gamma \rangle$$ where $S$ are the states, $A$ the actions, $T$ the transition probabilities (i.e. the probabilities $Pr(s'|s, a)$ to go from one state to another given an action), $R$ the rewards (given a certain state, and possibly action), and $\gamma$ is a discount factor that is used to reduce the importance of the of future rewards. So in order to use it, you need to have predefined: States : these can refer to for example grid maps in robotics, or for example door open and door closed . Actions : a fixed set of actions, such as for example going north, south, east, etc for a robot, or opening and closing a door. Transition probabilities : the probability of going from one state to another given an action. For example, what is the probability of an open door if the action is open . In a perfect world the later could be 1.0, but if it is a robot, it could have failed in handling the doorknob correctly. Another example in the case of a moving robot would be the action north , which in most cases would bring it in the grid cell north of it, but in some cases could have moved too much and reached the next cell for example. Rewards : these are used to guide the planning. In the case of the grid example, we might want to go to a certain cell, and the reward will be higher if we get closer. In the case of the door example, an open door might give a high reward. Once the MDP is defined, a policy can be learned by doing Value Iteration or Policy Iteration which calculates the expected reward for each of the states. The policy then gives per state the best (given the MDP model) action to do. In summary, an MDP is useful when you want to plan an efficient sequence of actions in which your actions can be not always 100% effective. Your questions Can it be used to predict things? I would call it planning, not predicting like regression for example. If so what types of things? See examples . Can it find patterns among infinite amounts of data? MDPs are used to do Reinforcement Learning , to find patterns you need Unsupervised Learning . And no, you cannot handle an infinite amount of data. Actually, the complexity of finding a policy grows exponentially with the number of states $|S|$ . What can this algorithm do for me. See examples . Examples of Applications of MDPs White, D.J. (1993) mentions a large list of applications: Harvesting: how much members of a population have to be left for breeding. Agriculture: how much to plant based on weather and soil state. Water resources: keep the correct water level at reservoirs. Inspection, maintenance and repair: when to replace/inspect based on age, condition, etc. 
Purchase and production: how much to produce based on demand. Queues: reduce waiting time. ... Finance: deciding how much to invest in stock. Robotics: A dialogue system to interact with people . Robot bartender . Robot exploration for navigation . .. And there are quite some more models. An even more interesting model is the Partially Observable Markovian Decision Process in which states are not completely visible, and instead, observations are used to get an idea of the current state, but this is out of the scope of this question. Additional Information A stochastic process is Markovian (or has the Markov property) if the conditional probability distribution of future states only depend on the current state, and not on previous ones (i.e. not on a list of previous states). | {
"source": [
"https://stats.stackexchange.com/questions/145122",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/61886/"
]
} |
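A minimal value-iteration sketch for a two-state MDP loosely based on the door example above; all transition probabilities and rewards are invented for illustration:
states  <- c("closed", "open")
actions <- c("open_door", "wait")
# P[s, s', a]: probability of moving from state s to s' under action a
P <- array(0, dim = c(2, 2, 2), dimnames = list(states, states, actions))
P[, , "open_door"] <- rbind(c(0.2, 0.8),      # opening the door sometimes fails
                            c(0.0, 1.0))
P[, , "wait"]      <- rbind(c(1.0, 0.0),
                            c(0.0, 1.0))
R     <- c(closed = 0, open = 1)              # reward for being in each state
gamma <- 0.9
V <- c(0, 0)
for (i in 1:200) {                            # value iteration
  Q <- sapply(actions, function(a) as.vector(R + gamma * (P[, , a] %*% V)))
  V <- apply(Q, 1, max)
}
data.frame(state = states, value = round(V, 2),
           best_action = actions[apply(Q, 1, which.max)])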
145,159 | I know that if the median and mean are approximately equal then this means there is a symmetric distribution but in this particular case I'm not certain. The mean and median are quite close (only 0.487m/gall difference) which would lead me to say there is a symmetric distribution but looking at the boxplot, it looks like it's slightly positively skewed (the median is closer to Q1 than Q3 as confirmed by the values). (I'm using Minitab if you have any specific advice for this piece of software.) | No doubt you have been told otherwise, but mean $=$ median does not imply symmetry. There's a measure of skewness based on mean minus median (the second Pearson skewness), but it can be 0 when the distribution is not symmetric (like any of the common skewness measures). Similarly, the relationship between mean and median doesn't necessarily imply a similar relationship between the midhinge ($(Q_1+Q_3)/2$) and median. They can suggest opposite skewness, or one may equal the median while the other doesn't. One way to investigate symmetry is via a symmetry plot *. If $Y_{(1)}, Y_{(2)}, ..., Y_{(n)}$ are the ordered observations from smallest to largest (the order statistics), and $M$ is the median, then a symmetry plot plots $Y_{(n)}-M$ vs $M-Y_{(1)}$, $Y_{(n-1)}-M$ vs $M-Y_{(2)}$ , ... and so on. * Minitab can do those . Indeed I raise this plot as a possibility because I've seen them done in Minitab. Here are four examples: $\hspace{6cm} \textbf{Symmetry plots}$ (The actual distributions were (left to right, top row first) - Laplace, Gamma(shape=0.8), beta(2,2) and beta(5,2). The code is Ross Ihaka's, from here ) With heavy-tailed symmetric examples, it's often the case that the most extreme points can be very far from the line; you would pay less attention to the distance from the line of one or two points as you near the top right of the figure. There are of course, other plots (I mentioned the symmetry plot not from a particular sense of advocacy of that particular one, but because I knew it was already implemented in Minitab). So let's explore some others. Here's the corresponding skewplots that Nick Cox suggested in comments: $\hspace{6cm} \textbf{Skewness plots}$ In these plots, a trend up would indicate a typically heavier right tail than left and a trend down would indicate a typically heavier left tail than right, while symmetry would be suggested by a relatively flat (though perhaps fairly noisy) plot. Nick suggests that this plot is better (specifically "more direct"). I am inclined to agree; the interpretation of the plot seems consequently a little easier, though the information in the corresponding plots are often quite similar (after you subtract the unit slope in the first set, you get something very like the second set). [Of course, none of these things will tell us that the distribution the data were drawn from is actually symmetric; we get an indication of how near-to-symmetric the sample is, and so to that extent we can judge if the data are reasonably consistent with being drawn from a near-symmetrical population.] | {
"source": [
"https://stats.stackexchange.com/questions/145159",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/72943/"
]
} |
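A base-R version of the symmetry plot described above, applied to simulated right-skewed data (Minitab's built-in symmetry plot does essentially the same pairing):
set.seed(1)
x  <- rgamma(200, shape = 0.8)                # a right-skewed example
xs <- sort(x)
M  <- median(x)
k  <- length(x) %/% 2
lower <- M - xs[1:k]                          # distance of the i-th smallest value below the median
upper <- rev(xs)[1:k] - M                     # distance of the i-th largest value above the median
plot(lower, upper, xlab = "Distance below median",
     ylab = "Distance above median", main = "Symmetry plot")
abline(0, 1, lty = 2)                         # points well above the line indicate right skew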
145,323 | I guess this is a basic question and it has to do with the direction of the gradient itself, but I'm looking for examples where 2nd order methods (e.g. BFGS ) are more effective than simple gradient descent. | Here's a common framework for interpreting both gradient descent and Newton's method, which is maybe a useful way to think of the difference as a supplement to @Sycorax's answer. (BFGS approximates Newton's method; I won't talk about it in particular here.) We're minimizing the function $f$, but we don't know how to do that directly. So, instead, we take a local approximation at our current point $x$ and minimize that. Newton's method approximates the function using a second-order Taylor expansion:
$$
f(y) \approx N_x(y) := f(x) + \nabla f(x)^T (y - x) + \tfrac12 (y - x)^T \, \nabla^2 f(x) \, (y - x)
,$$
where $\nabla f(x)$ denotes the gradient of $f$ at the point $x$ and $\nabla^2 f(x)$ the Hessian at $x$.
It then steps to $\arg\min_y N_x(y)$ and repeats. Gradient descent, only having the gradient and not the Hessian, can't just make a first-order approximation and minimize that, since as @Hurkyl noted it has no minimum. Instead, we define a step size $t$ and step to $x - t \nabla f(x)$. But note that
\begin{align}
x - t \,\nabla f(x)
&= \arg\min_y \left[f(x) + \nabla f(x)^T (y - x) + \tfrac{1}{2 t} \lVert y - x \rVert^2\right]
\\&= \arg\min_y \left[f(x) + \nabla f(x)^T (y - x) + \tfrac12 (y-x)^T \tfrac{1}{t} I (y - x)\right]
.\end{align}
Thus gradient descent minimizes a function
$$G_x(y) := f(x) + \nabla f(x)^T (y - x) + \tfrac12 (y-x)^T \tfrac{1}{t} I (y - x).$$ Thus gradient descent is kind of like using Newton's method, but instead of taking the second-order Taylor expansion, we pretend that the Hessian is $\tfrac1t I$. This $G$ is often a substantially worse approximation to $f$ than $N$, and hence gradient descent often takes much worse steps than Newton's method. This is counterbalanced, of course, by each step of gradient descent being so much cheaper to compute than each step of Newton's method. Which is better depends entirely on the nature of the problem, your computational resources, and your accuracy requirements. Looking at @Sycorax's example of minimizing a quadratic
$$
f(x) = \tfrac12 x^T A x + d^T x + c
$$
for a moment,
it's worth noting that this perspective helps with understanding both methods. With Newton's method, we'll have $N = f$ so that it terminates with the exact answer (up to floating point accuracy issues) in a single step. Gradient descent, on the other hand, uses
$$
G_x(y) = f(x) + (A x + d)^T y + \tfrac12 (x - y)^T \tfrac1t I (x-y)
$$
whose tangent plane at $x$ is correct, but whose curvature is entirely wrong, and indeed throws away the important differences in different directions when the eigenvalues of $A$ vary substantially. | {
"source": [
"https://stats.stackexchange.com/questions/145323",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/16052/"
]
} |
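A small numerical illustration of the closing point above; the quadratic, the step size and the iteration count are chosen only to make the contrast visible:
A <- diag(c(100, 1))                          # ill-conditioned Hessian
d <- c(-100, -1)                              # minimizer is -solve(A, d) = (1, 1)
f    <- function(x) 0.5 * sum(x * (A %*% x)) + sum(d * x)
grad <- function(x) as.vector(A %*% x + d)
x_newton <- c(0, 0) - solve(A, grad(c(0, 0))) # one Newton step lands exactly on (1, 1)
x_gd <- c(0, 0)
step <- 1 / 100                               # step size limited by the large eigenvalue (needs step < 2/100)
for (i in 1:100) x_gd <- x_gd - step * grad(x_gd)
rbind(newton = x_newton, gradient_descent = x_gd)   # GD is still well short in the flat direction
c(f_newton = f(x_newton), f_gd = f(x_gd))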
145,566 | I am interested in calculating area under the curve (AUC), or the c-statistic, by hand for a binary logistic regression model. For example, in the validation dataset, I have the true value for the dependent variable, retention (1 = retained; 0 = not retained), as well as a predicted retention status for each observation generated by my regression analysis using a model that was built using the training set (this will range from 0 to 1). My initial thoughts were to identify the "correct" number of model classifications and simply divide the number of "correct" observations by the number of total observations to calculate the c-statistic. By "correct", if the true retention status of an observation = 1 and the predicted retention status is > 0.5 then that is a "correct" classification. Additionally, if the true retention status of an observation = 0 and the predicted retention status is < 0.5 then that is also a "correct" classification. I assume a "tie" would occur when the predicted value = 0.5, but that phenomenon does not occur in my validation dataset. On the other hand, "incorrect" classifications would be if the true retention status of an observation = 1 and the predicted retention status is < 0.5 or if the true retention status for an outcome = 0 and the predicted retention status is > 0.5. I am aware of TP, FP, FN, TN, but not aware of how to calculate the c-statistic given this information. | I would recommend Hanley’s & McNeil’s 1982 paper ‘ The meaning and use of the area under a receiver operating characteristic (ROC) curve ’. Example They have the following table of disease status and test result (corresponding to, for example, the estimated risk from a logistic model). The first number on the right is the number of patients with true disease status ‘normal’ and the second number is the number of patients with true disease status ‘abnormal’: (1) Definitely normal: 33/3 (2) Probably normal: 6/2 (3) Questionable: 6/2 (4) Probably abnormal: 11/11 (5) Definitely abnormal: 2/33 So there are in total 58 ‘normal’ patients and ‘51’ abnormal ones. We see that when the predictor is 1, ‘Definitely normal’, the patient is usually normal (true for 33 of the 36 patients), and when it is 5, ‘Definitely abnormal’ the patients is usually abnormal (true for 33 of the 35 patients), so the predictor makes sense. But how should we judge a patient with a score of 2, 3, or 4? What we set our cutoff for judging a patients as abnormal or normal to determines the sensitivity and specificity of the resulting test. Sensitivity and specificity We can calculate the estimated sensitivity and specificity for different cutoffs. (I’ll just write ‘sensitivity’ and ‘specificity’ from now on, letting the estimated nature of the values be implicit.) If we choose our cutoff so that we classify all the patients as abnormal, no matter what their test results says (i.e., we choose the cutoff 1+), we will get a sensitivity of 51/51 = 1. The specificity will be 0/58 = 0. Doesn’t sound so good. OK, so let’s choose a less strict cutoff. We only classify patients as abnormal if they have a test result of 2 or higher. We then miss 3 abnormal patients, and have a sensitivity of 48/51 = 0.94. But we have a much increased specificity, of 33/58 = 0.57. We can now continue this, choosing various cutoffs (3, 4, 5, >5). (In the last case, we won’t classify any patients as abnormal, even if they have the highest possible test score of 5.) 
The ROC curve If we do this for all possible cutoffs, and then plot the sensitivity against 1 minus the specificity, we get the ROC curve. We can use the following R code: # Data
norm = rep(1:5, times=c(33,6,6,11,2))
abnorm = rep(1:5, times=c(3,2,2,11,33))
testres = c(abnorm,norm)
truestat = c(rep(1,length(abnorm)), rep(0,length(norm)))
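# testres: the 109 test scores (the 51 abnormal patients first, then the 58 normal ones)
# truestat: true disease status, 1 = abnormal, 0 = normal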
# Summary table (Table I in the paper)
( tab=as.matrix(table(truestat, testres)) ) The output is: testres
truestat 1 2 3 4 5
0 33 6 6 11 2
1 3 2 2 11 33 We can calculate various statistics: ( tot=colSums(tab) ) # Number of patients w/ each test result
( truepos=unname(rev(cumsum(rev(tab[2,])))) ) # Number of true positives
( falsepos=unname(rev(cumsum(rev(tab[1,])))) ) # Number of false positives
( totpos=sum(tab[2,]) ) # The total number of positives (one number)
( totneg=sum(tab[1,]) ) # The total number of negatives (one number)
(sens=truepos/totpos) # Sensitivity (fraction true positives)
(omspec=falsepos/totneg) # 1 − specificity (false positives)
sens=c(sens,0); omspec=c(omspec,0) # Numbers when we classify all as normal And using this, we can plot the (estimated) ROC curve: plot(omspec, sens, type="b", xlim=c(0,1), ylim=c(0,1), lwd=2,
xlab="1 − specificity", ylab="Sensitivity") # perhaps with xaxs="i"
grid()
abline(0,1, col="red", lty=2) Manually calculating the AUC We can very easily calculate the area under the ROC curve, using the formula for the area of a trapezoid: height = (sens[-1]+sens[-length(sens)])/2
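# each trapezoid's parallel sides are two consecutive sensitivities; 'height' is their average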
width = -diff(omspec) # = diff(rev(omspec))
sum(height*width) The result is 0.8931711. A concordance measure The AUC can also be seen as a concordance measure. If we take all possible pairs of patients where one is normal and the other is abnormal, we can calculate how frequently it’s the abnormal one that has the highest (most ‘abnormal-looking’) test result (if they have the same value, we count this as ‘half a victory’): o = outer(abnorm, norm, "-")
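# o is a 51 × 58 matrix of differences, o[i,j] = abnorm[i] - norm[j]; a positive entry
# means the abnormal patient of that pair has the higher test score (a concordant pair),
# and a zero entry is a tie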
mean((o>0) + .5*(o==0)) The answer is again 0.8931711, the area under the ROC curve. This will always be the case. A graphical view of concordance As pointed out by Harrell in his answer, this also has a graphical interpretation. Let’s plot test score (risk estimate) on the y -axis and true disease status on the x -axis (here with some jittering, to show overlapping points): plot(jitter(truestat,.2), jitter(testres,.8), las=1,
xlab="True disease status", ylab="Test score") Let us now draw a line between each point on the left (a ‘normal’ patient) and each point on the right (an ‘abnormal’ patient). The proportion of lines with a positive slope (i.e., the proportion of concordant pairs) is the concordance index (flat lines count as ‘50% concordance’). It’s a bit difficult to visualise the actual lines for this example, due to the number of ties (equal risk score), but with some jittering and transparency we can get a reasonable plot: d = cbind(x_norm=0, x_abnorm=1, expand.grid(y_norm=norm, y_abnorm=abnorm))
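# d has one row per normal-abnormal pair (58 × 51 = 2958 rows); each row holds the
# x and y coordinates of the two endpoints of one line segment in the plot below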
library(ggplot2)
ggplot(d, aes(x=x_norm, xend=x_abnorm, y=y_norm, yend=y_abnorm)) +
geom_segment(colour="#ff000006",
position=position_jitter(width=0, height=.1)) +
xlab("True disease status") + ylab("Test\nscore") +
theme_light() + theme(axis.title.y=element_text(angle=0)) We see that most of the lines slope upwards, so the concordance index will be high. We also see the contribution to the index from each type of observation pair. Most of it comes from normal patients with a risk score of 1 paired with abnormal patients with a risk score of 5 (1–5 pairs), but quite a lot also comes from 1–4 pairs and 4–5 pairs. And it’s very easy to calculate the actual concordance index based on the slope definition: d = transform(d, slope=(y_norm-y_abnorm)/(x_norm-x_abnorm))
mean((d$slope > 0) + .5*(d$slope==0)) The answer is again 0.8931711, i.e., the AUC. The Wilcoxon–Mann–Whitney test There is a close connection between the concordance measure and the Wilcoxon–Mann–Whitney test. Actually, the latter tests if the probability of concordance (i.e., that it’s the abnormal patient in a random normal–abnormal pair that will have the most ‘abnormal-looking’ test result) is exactly 0.5. And its test statistic is just a simple transformation of the estimated concordance probability: > ( wi = wilcox.test(abnorm,norm) )
Wilcoxon rank sum test with continuity correction
data: abnorm and norm
W = 2642, p-value = 1.944e-13
alternative hypothesis: true location shift is not equal to 0 The test statistic ( W = 2642 ) counts the number of concordant pairs. If we divide it by the number of possible pairs, we get a familiar number: w = wi$statistic
w/(length(abnorm)*length(norm)) Yes, it’s 0.8931711, the area under the ROC curve. Easier ways to calculate the AUC (in R) But let’s make life easier for ourselves. There are various packages that calculate the AUC for us automatically. The Epi package The Epi package creates a nice ROC curve with various statistics (including the AUC) embedded: library(Epi)
ROC(testres, truestat) # also try adding plot="sp" The pROC package I also like the pROC package, since it can smooth the ROC estimate (and calculate an AUC estimate based on the smoothed ROC): (The red line is the original ROC, and the black line is the smoothed ROC. Also note the default 1:1 aspect ratio. It makes sense to use this, since both the sensitivity and specificity have a 0–1 range.) The estimated AUC from the smoothed ROC is 0.9107, similar to, but slightly larger than, the AUC from the unsmoothed ROC (if you look at the figure, you can easily see why it’s larger). (Though we really have too few possible distinct test result values to calculate a smooth AUC). The rms package Harrell’s rms package can calculate various related concordance statistics using the rcorr.cens() function. The C Index in its output is the AUC: > library(rms)
> rcorr.cens(testres,truestat)[1]
C Index
0.8931711 The caTools package Finally, we have the caTools package and its colAUC() function. It has a few advantages over other packages (mainly speed and the ability to work with multi-dimensional data – see ?colAUC ) that can sometimes be helpful. But of course it gives the same answer as we have calculated over and over: library(caTools)
colAUC(testres, truestat, plotROC=TRUE)
[,1]
0 vs. 1 0.8931711 Final words Many people seem to think that the AUC tells us how ‘good’ a test is. And some people think that the AUC is the probability that the test will correctly classify a patient. It is not . As you can see from the above example and calculations, the AUC tells us something about a family of tests, one test for each possible cutoff. And the AUC is calculated based on cutoffs one would never use in practice. Why should we care about the sensitivity and specificity of ‘nonsensical’ cutoff values? Still, that’s what the AUC is (partially) based on. (Of course, if the AUC is very close to 1, almost every possible test will have great discriminatory power, and we would all be very happy.) The ‘random normal–abnormal’ pair interpretation of the AUC is nice (and can be extended, for instance to survival models, where we see if its the person with the highest (relative) hazard that dies the earliest). But one would never use it in practice. It’s a rare case where one knows one has one healthy and one ill person, doesn’t know which person is the ill one, and must decide which of them to treat. (In any case, the decision is easy; treat the one with the highest estimated risk.) So I think studying the actual ROC curve will be more useful than just looking at the AUC summary measure. And if you use the ROC together with (estimates of the) costs of false positives and false negatives, along with base rates of what you’re studying, you can get somewhere. Also note that the AUC only measures discrimination , not calibration. That is, it measures whether you can discriminate between two persons (one ill and one healthy), based on the risk score. For this, it only looks at relative risk values (or ranks, if you will, cf. the Wilcoxon–Mann–Whitney test interpretation), not the absolute ones, which you should be interested in. For example, if you divide each risk estimate from your logistic model by 2, you will get exactly the same AUC (and ROC). When evaluating a risk model, calibration is also very important. To examine this, you will look at all patients with a risk score of around, e.g., 0.7, and see if approximately 70% of these actually were ill. Do this for each possible risk score (possibly using some sort of smoothing / local regression). Plot the results, and you’ll get a graphical measure of calibration . If have a model with both good calibration and good discrimination, then you start to have good model. :) | {
"source": [
"https://stats.stackexchange.com/questions/145566",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/29068/"
]
} |
145,768 | I've found that Imagenet and other large CNNs make use of local response normalization layers. However, I cannot find that much information about them. How important are they and when should they be used? From http://caffe.berkeleyvision.org/tutorial/layers.html#data-layers : "The local response normalization layer performs a kind of “lateral
inhibition” by normalizing over local input regions. In
ACROSS_CHANNELS mode, the local regions extend across nearby channels,
but have no spatial extent (i.e., they have shape local_size x 1 x 1).
In WITHIN_CHANNEL mode, the local regions extend spatially, but are in
separate channels (i.e., they have shape 1 x local_size x local_size).
Each input value is divided by $(1 + (\alpha/n) \sum_i x_i^2)^\beta$, where n is the size of
each local region, and the sum is taken over the region centered at
that value (zero padding is added where necessary)." Edit: It seems that these kinds of layers have a minimal impact and are not used any more. Basically, their role has been outplayed by other regularization techniques (such as dropout and batch normalization), better initializations and training methods. See my answer below for more details. | It seems that these kinds of layers have a minimal impact and are not used any more. Basically, their role has been outplayed by other regularization techniques (such as dropout and batch normalization), better initializations and training methods. This is what is written in the lecture notes for the Stanford Course CS231n on ConvNets: Normalization Layer Many types of normalization layers have been proposed for use in
ConvNet architectures, sometimes with the intentions of implementing
inhibition schemes observed in the biological brain. However, these
layers have recently fallen out of favor because in practice their
contribution has been shown to be minimal, if any. For various types
of normalizations, see the discussion in Alex Krizhevsky's
cuda-convnet library API. | {
"source": [
"https://stats.stackexchange.com/questions/145768",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/29025/"
]
} |
145,902 | I just started to study about stats and models stuff. Currently, my understanding is that we use MLE to estimate the best parameter(s) for a model. However, when I try to understand how the neural networks work, it seems like they commonly use another approach to estimate the parameters instead. Why don't we use MLE or is it possible to use MLE at all? | MLE estimates of artificial neural network weights (ANN) certainly are possible ; indeed, it's entirely typical. For classification problems, a standard objective function is cross-entropy, which is the same as the negative log-likelihood of a binomial model. For regression problems, residual square error is used, which parallels the MLE of OLS regression. See: How to construct a cross-entropy loss for general regression targets? But there are some problems with assuming that the nice properties of MLEs derived in classical statistics, such as uniqueness , also hold for MLEs of neural networks. There is a general problem with ANN estimation: there are many symmetric solutions to even single-layer ANNs. Reversing the signs of the weights for the hidden layer, and reversing the signs of the hidden layer activation parameters both have equal likelihood. Additionally, you can permute any of the hidden nodes and these permutations also have the same likelihood. This is consequential insofar as you must acknowledge that you are giving up identifiability. However, if identifiability is not important, then you can simply accept that these alternative solutions are just reflections and/or permutations of each other. This is in contrast to classical usages of MLE in statistics, such as a OLS regression: the OLS problem is convex, and strictly convex when the design matrix is full rank. Strong convexity implies that there is a single, unique minimizer. It's true that these solutions have the same quality (same loss, same accuracy), but a number of students who arrive at neural networks from an understanding of regression are surprised to learn that NNs are non-convex and do not have unique optimal parameter estimates. ANNs will tend to overfit the data when using an unconstrained solution. The weights will tend to race away from the origin to implausibly large values which do not generalize well or predict new data with much accuracy. Imposing weight decay or other regularization methods has the effect of shrinking weight estimates toward zero. This doesn't necessarily resolve the indeterminacy issue from (1), but it can improve the generalization of the network. The loss function is nonconvex and optimization can find locally optimal solutions which are not globally optimal. Or perhaps these solutions are saddle points, where some optimization methods stall. The results in this paper find that modern estimation methods sidestep this issue. In a classical statistical setting, penalized fit methods such as elastic net, $L^1$ or $L^2$ regularization can make convex a rank-deficient (i.e. non-convex) problem. This fact does not extend to the neural network setting, due to the permutation issue in (1). Even if you restrict the norm of your parameters, permuting the weights or symmetrically reversing signs won't change the norm of the parameter vector; nor will it change the likelihood. Therefore the loss will remain the same for the permuted or reflected models and the model is still non-identified. | {
"source": [
"https://stats.stackexchange.com/questions/145902",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/73316/"
]
} |
146,221 | Identical meaning, that it will produce identical results for a similarity ranking between a vector u and a set of vectors V . I have a vector space model which has distance measure (euclidean distance, cosine similarity) and normalization technique (none, l1, l2) as parameters. From my understanding, the results from the settings [cosine, none] should be identical or at least really really similar to [euclidean, l2], but they aren't. There actually is a good chance the system is still buggy -- or do I have something critical wrong about vectors? edit: I forgot to mention that the vectors are based on word counts from documents in a corpus. Given a query document (which I also transform in a word count vector), I want to find the document from my corpus which is most similar to it. Just calculating their euclidean distance is a straight forward measure, but in the kind of task I work at, the cosine similarity is often preferred as a similarity indicator, because vectors that only differ in length are still considered equal. The document with the smallest distance/cosine similarity is considered the most similar. | For $\ell^2$-normalized vectors $\mathbf{x}, \mathbf{y}$,
$$||\mathbf{x}||_2 = ||\mathbf{y}||_2 = 1,$$
we have that the squared Euclidean distance is proportional to the cosine distance ,
\begin{align}
||\mathbf{x} - \mathbf{y}||_2^2
&= (\mathbf{x} - \mathbf{y})^\top (\mathbf{x} - \mathbf{y}) \\
&= \mathbf{x}^\top \mathbf{x} - 2 \mathbf{x}^\top \mathbf{y} + \mathbf{y}^\top \mathbf{y} \\
&= 2 - 2\mathbf{x}^\top \mathbf{y} \\
&= 2 - 2 \cos\angle(\mathbf{x}, \mathbf{y})
\end{align}
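For instance, the identity is easy to verify numerically (a small illustrative sketch with two arbitrary vectors, not taken from the question's data): x <- rnorm(5); y <- rnorm(5)
x <- x / sqrt(sum(x^2)); y <- y / sqrt(sum(y^2)) # l2-normalize both vectors
sum((x - y)^2) # squared Euclidean distance
2 - 2 * sum(x * y) # 2 minus twice the cosine similarity; prints the same value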
That is, even if you normalized your data and your algorithm was invariant to scaling of the distances, you would still expect differences because of the squaring. | {
"source": [
"https://stats.stackexchange.com/questions/146221",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/66771/"
]
} |
146,277 | Hinge loss can be defined using $\text{max}(0, 1-y_i\mathbf{w}^T\mathbf{x}_i)$ and the log loss can be defined as $\text{log}(1 + \exp(-y_i\mathbf{w}^T\mathbf{x}_i))$ I have the following questions: Are there any disadvantages of hinge loss (e.g. sensitive to outliers as mentioned in this article ) ? What are the differences, advantages, disadvantages of one compared to the other? | Logarithmic loss minimization leads to well-behaved probabilistic outputs. Hinge loss leads to some (not guaranteed) sparsity on the dual, but it doesn't help at probability estimation. Instead, it punishes misclassifications (that's why it's so useful to determine margins): diminishing hinge-loss comes with diminishing across margin misclassifications. So, summarizing: Logarithmic loss leads to better probability estimation at the cost of accuracy Hinge loss leads to better accuracy and some sparsity at the cost of much less sensitivity regarding probabilities | {
"source": [
"https://stats.stackexchange.com/questions/146277",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/11409/"
]
} |
146,804 | I have a question about two different methods from different libraries which seem to do the same job. I am trying to build a linear regression model. Here is the code where I use the statsmodels library with OLS: X_train, X_test, y_train, y_test = cross_validation.train_test_split(x, y, test_size=0.3, random_state=1)
x_train = sm.add_constant(X_train)
model = sm.OLS(y_train, x_train)
results = model.fit()
print "GFT + Wiki / GT R-squared", results.rsquared This prints out GFT + Wiki / GT R-squared 0.981434611923 and the second one uses the scikit-learn library's linear model method: model = LinearRegression()
model.fit(X_train, y_train)
predictions = model.predict(X_test)
print 'GFT + Wiki / GT R-squared: %.4f' % model.score(X_test, y_test) This prints out GFT + Wiki / GT R-squared: 0.8543 So my question is that both methods print an R^2 result, but one prints out 0.98 and the other one 0.85. From my understanding, OLS works with the training dataset. So my questions are: Is there a way to work with the test dataset with OLS? Does the training dataset score give us any meaning (in OLS we didn't use the test dataset)? From my past knowledge we have to work with test data. What is the difference between OLS and scikit-learn linear regression? Which one should we use for calculating the score of the model? Thanks for any help. | First in terms of usage. You can get the prediction in statsmodels in a very similar way as in scikit-learn, except that we use the results instance returned by fit predictions = results.predict(X_test) Given the predictions, we can calculate statistics that are based on the prediction error prediction_error = y_test - predictions There is a separate list of functions to calculate goodness of prediction statistics with it, but it's not integrated into the models, nor does it include R squared. (I've never heard of R squared used for out of sample data.) Calculating those requires a bit more work by the user and statsmodels does not have the same set of statistics, especially not for classification or models with a binary response variable. To your other two points: Linear regression is in its basic form the same in statsmodels and in scikit-learn. However, the implementation differs which might produce different results in edge cases, and scikit learn has in general more support for larger models. For example, statsmodels currently uses sparse matrices in very few parts. The most important difference is in the surrounding infrastructure and the use cases that are directly supported. Statsmodels follows largely the traditional model where we want to know how well a given model fits the data, and what variables "explain" or affect the outcome, or what the size of the effect is.
Scikit-learn follows the machine learning tradition where the main supported task is choosing the "best" model for prediction. As a consequence, the emphasis in the supporting features of statsmodels is in analysing the training data, which includes hypothesis tests and goodness-of-fit measures, while the emphasis in the supporting infrastructure in scikit-learn is on model selection for out-of-sample prediction and therefore cross-validation on "test data". This points out the distinction, but there is still quite a lot of overlap in usage. statsmodels also does prediction, and additionally forecasting in a time series context.
But, when we want to do cross-validation for prediction in statsmodels it is currently still often easier to reuse the cross-validation setup of scikit-learn together with the estimation models of statsmodels. | {
"source": [
"https://stats.stackexchange.com/questions/146804",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/103833/"
]
} |
146,907 | What techniques are available for collapsing (or pooling) many categories to a few, for the purpose of using them as an input (predictor) in a statistical model? Consider a variable like college student major (discipline chosen by an undergraduate student). It is unordered and categorical, but it can potentially have dozens of distinct levels. Let's say I want to use major as a predictor in a regression model. Using these levels as-is for modeling leads to all sorts of issues because there are just so many. A lot of statistical precision would be thrown away to use them, and the results are hard to interpret. We're rarely interested in specific majors -- we're much more likely to be interested in broad categories (subgroups) of majors. But it isn't always clear how to divide up the levels into such higher-level categories, or even how many higher-level categories to use. For typical data I would be happy to use factor analysis, matrix factorization, or a discrete latent modeling technique. But majors are mutually exclusive categories, so I'm hesitant to exploit their covariance for anything. Furthermore I don't care about the major categories on their own. I care about producing higher-level categories that are coherent with respect to my regression outcome . In the binary outcome case, that suggests to me something like linear discriminant analysis (LDA) to generate higher-level categories that maximize discriminative performance. But LDA is a limited technique and that feels like dirty data dredging to me. Moreover any continuous solution will be hard to interpret. Meanwhile something based on covariances, like multiple correspondence analysis (MCA), seems suspect to me in this case because of the inherent dependence among mutually exclusive dummy variables -- they're better suited for studying multiple categorical variables, rather than multiple categories of the same variable. edit : to be clear, this is about collapsing categories (not selecting them), and the categories are predictors or independent variables. In hindsight, this problem seems like an appropriate time to "regularize 'em all and let God sort 'em out". Glad to see this question is interesting to so many people! | If I understood correctly, you imagine a linear model where one of the predictors is categorical (e.g. college major); and you expect that for some subgroups of its levels (subgroups of categories) the coefficients might be exactly the same. So perhaps the regression coefficients for Maths and Physics are the same, but different from those for Chemistry and Biology. In a simplest case, you would have a "one way ANOVA" linear model with a single categorical predictor: $$y_{ij} = \mu + \alpha_i + \epsilon_{ij},$$ where $i$ encodes the level of the categorical variable (the category). But you might prefer a solution that collapses some levels (categories) together, e.g. $$\begin{cases}\alpha_1=\alpha_2, \\ \alpha_3=\alpha_4=\alpha_5.\end{cases}$$ This suggests that one can try to use a regularization penalty that would penalize solutions with differing alphas. One penalty term that immediately comes to mind is $$L=\omega \sum_{i<j}|\alpha_i-\alpha_j|.$$ This resembles lasso and should enforce sparsity of the $\alpha_i-\alpha_j$ differences, which is exactly what you want: you want many of them to be zero. Regularization parameter $\omega$ should be selected with cross-validation. I have never dealt with models like that and the above is the first thing that came to my mind. 
Then I decided to see if there is something like that implemented. I made some google searches and soon realized that this is called fusion of categories; searching for lasso fusion categorical will give you a lot of references to read. Here are a few that I briefly looked at: Gerhard Tutz, Regression for Categorical Data, see pp. 175-175 in Google Books . Tutz mentions the following four papers: Land and Friedman, 1997, Variable fusion: a new adaptive signal regression method Bondell and Reich, 2009, Simultaneous factor selection and collapsing levels in ANOVA Gertheiss and Tutz, 2010, Sparse modeling of categorial explanatory variables Tibshirani et al. 2005, Sparsity and smoothness via the fused lasso is somewhat relevant even if not exactly the same (it is about ordinal variables) Gertheiss and Tutz 2010, published in the Annals of Applied Statistics, looks like a recent and very readable paper that contains other references. Here is its abstract: Shrinking methods in regression analysis are usually designed for metric
predictors. In this article, however, shrinkage methods for categorial predictors
are proposed. As an application we consider data from the Munich rent
standard, where, for example, urban districts are treated as a categorial predictor.
If independent variables are categorial, some modifications to usual
shrinking procedures are necessary. Two $L_1$-penalty based methods for factor
selection and clustering of categories are presented and investigated. The
first approach is designed for nominal scale levels, the second one for ordinal
predictors. Besides applying them to the Munich rent standard, methods are
illustrated and compared in simulation studies. I like their Lasso-like solution paths that show how levels of two categorical variables get merged together when regularization strength increases: | {
"source": [
"https://stats.stackexchange.com/questions/146907",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/36229/"
]
} |
147,001 | When we conduct linear regression $y=ax+b$ to fit a bunch of data points $(x_1,y_1),(x_2,y_2),...,(x_n,y_n)$, the classic approach minimizes the squared error. I have long been puzzled by a question: will minimizing the squared error yield the same result as minimizing the absolute error? If not, why is minimizing the squared error better? Is there any reason other than "the objective function is differentiable"? Squared error is also widely used to evaluate model performance, but absolute error is less popular. Why is squared error more commonly used than absolute error? If taking derivatives is not involved, calculating absolute error is as easy as calculating squared error, so why is squared error so prevalent? Is there any unique advantage that can explain its prevalence? Thank you. | Minimizing squared errors (MSE) is definitely not the same as minimizing absolute deviations (MAD) of errors. MSE provides the mean response of $y$ conditioned on $x$, while MAD provides the median response of $y$ conditioned on $x$. Historically, Laplace originally considered the maximum observed error as a measure of the correctness of a model. He soon moved to considering MAD instead. Due to his inability to solve both problems exactly, he soon considered the differentiable MSE. He and Gauss (seemingly concurrently) derived the normal equations, a closed-form solution for this problem. Nowadays, solving the MAD is relatively easy by means of linear programming. As is well known, however, linear programming does not have a closed-form solution. From an optimization perspective, both correspond to convex functions. However, MSE is differentiable, thus allowing for gradient-based methods, much more efficient than their non-differentiable counterpart. MAD is not differentiable at $x=0$. A further theoretical reason is that, in a Bayesian setting, when assuming uniform priors of the model parameters, MSE yields normally distributed errors, which has been taken as a proof of correctness of the method. Theorists like the normal distribution because they believe it is an empirical fact, while experimentalists like it because they believe it is a theoretical result. A final reason why MSE may have had the wide acceptance it has is that it is based on the Euclidean distance (in fact it is a solution of the projection problem on a Euclidean Banach space), which is extremely intuitive given our geometrical reality. | {
"source": [
"https://stats.stackexchange.com/questions/147001",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/70786/"
]
} |
147,836 | I want to get a prediction interval around a prediction from a lmer() model. I have found some discussion about this: http://rstudio-pubs-static.s3.amazonaws.com/24365_2803ab8299934e888a60e7b16113f619.html http://glmm.wikidot.com/faq but they seem to not take the uncertainty of the random effects into account. Here's a specific example. I am racing gold fish. I have data on the past 100 races. I want to predict the 101st, taking into account uncertainty of my RE estimates, and FE estimates. I am including a random intercept for fish (there are 10 different fish), and fixed effect for weight (less heavy fish are quicker). library("lme4")
fish <- as.factor(rep(letters[1:10], each=100))
race <- as.factor(rep(900:999, 10))
oz <- round(1 + rnorm(1000)/10, 3)
sec <- 9 + rep(1:10, rep(100,10))/10 + oz + rnorm(1000)/10
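# each fish gets its own baseline (9 + fish index/10), plus a weight effect and noise with sd 0.1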
fishDat <- data.frame(fishID = fish,
raceID = race, fishWt = oz, time = sec)
head(fishDat)
plot(fishDat$fishID, fishDat$time)
lme1 <- lmer(time ~ fishWt + (1 | fishID), data=fishDat)
summary(lme1) Now, to predict the 101st race. The fish have been weighed and are ready to go: newDat <- data.frame(fishID = letters[1:10],
raceID = rep(1000, 10),
fishWt = 1 + round(rnorm(10)/10, 3))
newDat$pred <- predict(lme1, newDat)
newDat
fishID raceID fishWt pred
1 a 1000 1.073 10.15348
2 b 1000 1.001 10.20107
3 c 1000 0.945 10.25978
4 d 1000 1.110 10.51753
5 e 1000 0.910 10.41511
6 f 1000 0.848 10.44547
7 g 1000 0.991 10.68678
8 h 1000 0.737 10.56929
9 i 1000 0.993 10.89564
10 j 1000 0.649 10.65480 Fish D has really let himself go (1.11 oz) and is actually predicted to lose to Fish E and Fish F, both of whom he has been better than in the past. However, now I want to be able to say, "Fish E (weighing 0.91oz) will beat Fish D (weighing 1.11oz) with probability p." Is there a way to make such a statement using lme4? I want my probability p to take into account my uncertainty in both the fixed effect, and the random effect. Thanks! PS looking at the predict.merMod documentation, it suggests "There is no option for computing standard errors of predictions because it is difficult to define an efficient method that incorporates uncertainty in the variance parameters; we recommend bootMer for this task," but by golly, I cannot see how to use bootMer to do this. It seems bootMer would be used to get bootstrapped confidence intervals for parameter estimates, but I could be wrong. UPDATED Q: OK, I think I was asking the wrong question. I want to be able to say, "Fish A, weighing w oz, will have a race time that is (lcl, ucl) 90% of the time." In the example I have laid out, Fish A, weighing 1.0 oz, will have a race time of 9 + 0.1 + 1 = 10.1 sec on average, with a standard deviation of 0.1. Thus, his observed race time will be between x <- rnorm(mean = 10.1, sd = 0.1, n=10000)
quantile(x, c(0.05,0.50,0.95))
5% 50% 95%
9.938541 10.100032 10.261243 90% of the time. I want a prediction function that attempts to give me that answer. Setting all fishWt = 1.0 in newDat , re-running the sim, and using (as suggested by Ben Bolker below) predFun <- function(fit) {
predict(fit,newDat)
}
bb <- bootMer(lme1,nsim=1000,FUN=predFun, use.u = FALSE)
predMat <- bb$t gives > quantile(predMat[,1], c(0.05,0.50,0.95))
5% 50% 95%
10.01362 10.55646 11.05462 This seems to actually be centered around the population average? As if it's not taking the FishID effect into account? I thought maybe it was a sample size issue, but when I bumped the number of observed races from 100 to 10000, I still get similar results. I'll note bootMer uses use.u=FALSE by default. On the flip side, using bb <- bootMer(lme1,nsim=1000,FUN=predFun, use.u = TRUE) gives > quantile(predMat[,1], c(0.05,0.50,0.95))
5% 50% 95%
10.09970 10.10128 10.10270 That interval is too narrow, and would seem to be a confidence interval for Fish A's mean time. I want a confidence interval for Fish A's observed race time, not his average race time. How can I get that? UPDATE 2, ALMOST: I thought I found what I was looking for in Gelman and Hill (2007) , page 273. Need to utilize the arm package. library("arm") For Fish A: x.tilde <- 1 #observed fishWt for new race
sigma.y.hat <- sigma.hat(lme1)$sigma$data #get uncertainty estimate of our model
coef.hat <- as.matrix(coef(lme1)$fishID)[1,] #get intercept (random) and fishWt (fixed) parameter estimates
y.tilde <- rnorm(1000, coef.hat %*% c(1, x.tilde), sigma.y.hat) #simulate
quantile (y.tilde, c(.05, .5, .95))
5% 50% 95%
9.930695 10.100209 10.263551 For all the fishes: x.tilde <- rep(1,10) #assume all fish weight 1 oz
#x.tilde <- 1 + rnorm(10)/10 #alternatively, draw random weights as in original example
sigma.y.hat <- sigma.hat(lme1)$sigma$data
coef.hat <- as.matrix(coef(lme1)$fishID)
y.tilde <- matrix(rnorm(1000, coef.hat %*% matrix(c(rep(1,10), x.tilde), nrow = 2 , byrow = TRUE), sigma.y.hat), ncol = 10, byrow = TRUE)
quantile (y.tilde[,1], c(.05, .5, .95))
5% 50% 95%
9.937138 10.102627 10.234616 Actually, this probably isn't exactly what I want. I'm only taking into account the overall model uncertainty. In a situation where I have, say, 5 observed races for Fish K and 1000 observed races for Fish L, I think the uncertainty associated with my prediction for Fish K should be much larger than the uncertainty associated with my prediction for Fish L. Will look further into Gelman and Hill 2007. I feel I may end up having to switch to BUGS (or Stan). UPDATE THE 3rd: Perhaps I am conceptualizing things poorly. Using the predictInterval() function given by Jared Knowles in an answer below gives intervals that aren't quite what I would expect... library("lattice")
library("lme4")
library("ggplot2")
fish <- c(rep(letters[1:10], each = 100), rep("k", 995), rep("l", 5))
oz <- round(1 + rnorm(2000)/10, 3)
sec <- 9 + c(rep(1:10, each = 100)/10,rep(1.1, 995), rep(1.2, 5)) + oz + rnorm(2000)
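# fish k and l get baselines of 1.1 and 1.2; note the residual noise now has sd 1 (no /10)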
fishDat <- data.frame(fishID = fish, fishWt = oz, time = sec)
dim(fishDat)
head(fishDat)
plot(fishDat$fishID, fishDat$time)
lme1 <- lmer(time ~ fishWt + (1 | fishID), data=fishDat)
summary(lme1)
dotplot(ranef(lme1, condVar = TRUE)) I have added two new fish. Fish K, for whom we have observed 995 races, and Fish L, for whom we have observed 5 races. We have observed 100 races for Fish A-J. I fit the same lmer() as before. Looking at the dotplot() from the lattice package: By default, dotplot() reorders the random effects by their point estimate. The estimate for Fish L is on the top line, and has a very wide confidence interval. Fish K is on the third line, and has a very narrow confidence interval. This makes sense to me. We have lots of data on Fish K, but not a lot of data on Fish L, so we are more confident in our guesstimate about Fish K's true swimming speed. Now, I would think this would lead to a narrow prediction interval for Fish K, and a wide prediction interval for Fish L when using predictInterval() . Howeva: newDat <- data.frame(fishID = letters[1:12],
fishWt = 1)
preds <- predictInterval(lme1, newdata = newDat, n.sims = 999)
preds
ggplot(aes(x=letters[1:12], y=fit, ymin=lwr, ymax=upr), data=preds) +
geom_point() +
geom_linerange() +
labs(x="Index", y="Prediction w/ 95% PI") + theme_bw() All of those prediction intervals appear to be identical in width. Why isn't our prediction for Fish K narrower than the others? Why isn't our prediction for Fish L wider than the others? | This question and excellent exchange was the impetus for creating the predictInterval function in the merTools package. bootMer is the way to go, but for some problems it is not feasible computationally to generate bootstrapped refits of the whole model (in cases where the model is large). In those cases, predictInterval is designed to use the arm::sim functions to generate distributions of parameters in the model and then to use those distributions to generate simulated values of the response given the newdata provided by the user. It's simple to use -- all you would need to do is: library(merTools)
preds <- predictInterval(lme1, newdata = newDat, n.sims = 999) You can specify a whole host of other values to predictInterval including setting the interval for the prediction intervals, choosing whether to report the mean or median of the distribution, and choosing whether or not to include the residual variance from the model. It's not a full prediction interval because the variability of the theta parameters in the lmer object are not included, but all of the other variation is captured through this method, giving a pretty decent approximation. | {
"source": [
"https://stats.stackexchange.com/questions/147836",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/74305/"
]
} |
148,004 | I am used to seeing Ljung-Box test used quite frequently for testing autocorrelation in raw data or in model residuals. I had nearly forgotten that there is another test for autocorrelation, namely, Breusch-Godfrey test. Question: what are the main differences and similarities of the Ljung-Box and the Breusch-Godfrey tests, and when should one be preferred over the other? (References are welcome. Somehow I was not able to find any comparisons of the two tests although I looked in a few textbooks and searched for material online. I was able to find the descriptions of each test separately , but what I am interested in is the comparison of the two.) | There are some strong voices in the Econometrics community against the validity of the Ljung-Box $Q$-statistic for testing for autocorrelation based on the residuals from an autoregressive model (i.e. with lagged dependent variables in the regressor matrix), see particularly Maddala (2001) "Introduction to Econometrics (3d edition), ch 6.7, and 13. 5 p 528 . Maddala literally laments the widespread use of this test, and instead considers as appropriate the "Langrange Multiplier" test of Breusch and Godfrey. Maddala's argument against the Ljung-Box test is the same as the one raised against another omnipresent autocorrelation test, the "Durbin-Watson" one: with lagged dependent variables in the regressor matrix, the test is biased in favor of maintaining the null hypothesis of "no-autocorrelation" (the Monte-Carlo results obtained in @javlacalle answer allude to this fact). Maddala also mentions the low power of the test, see for example Davies, N., & Newbold, P. (1979). Some power studies of a portmanteau test of time series model specification. Biometrika, 66(1), 153-155 . Hayashi(2000) , ch. 2.10 "Testing For serial correlation" , presents a unified theoretical analysis, and I believe, clarifies the matter. Hayashi starts from zero:
For the Ljung-Box $Q$-statistic to be asymptotically distributed as a chi-square, it must be the case that the process $\{z_t\}$ (whatever $z$ represents), whose sample autocorrelations we feed into the statistic is, under the null hypothesis of no autocorrelation, a martingale-difference sequence, i.e. that it satisfies $$E(z_t \mid z_{t-1}, z_{t-2},...) = 0$$ and also it exhibits "own" conditional homoskedasticity $$E(z^2_t \mid z_{t-1}, z_{t-2},...) = \sigma^2 >0$$ Under these conditions the Ljung-Box $Q$-statistic (which is a corrected-for-finite-samples variant of the original Box-Pierce $Q$-statistic), has asymptotically a chi-squared distribution, and its use has asymptotic justification. Assume now that we have specified an autoregressive model (that perhaps includes also independent regressors in addition to lagged dependent variables), say $$y_t = \mathbf x_t'\beta + \phi(L)y_t + u_t$$ where $\phi(L)$ is a polynomial in the lag operator, and we want to test for serial correlation by using the residuals of the estimation. So here $z_t \equiv \hat u_t$. Hayashi shows that in order for the Ljung-Box $Q$-statistic based on the sample autocorrelations of the residuals, to have an asymptotic chi-square distribution under the null hypothesis of no autocorrelation, it must be the case that all regressors are "strictly exogenous" to the error term in the following sense: $$E(\mathbf x_t\cdot u_s) = 0 ,\;\; E(y_t\cdot u_s)=0 \;\;\forall t,s$$ The "for all $t,s$" is the crucial requirement here, the one that reflects strict exogeneity. And it does not hold when lagged dependent variables exist in the regressor matrix. This is easily seen: set $s= t-1$ and then $$E[y_t u_{t-1}] = E[(\mathbf x_t'\beta + \phi(L)y_t + u_t)u_{t-1}] =$$ $$ E[\mathbf x_t'\beta \cdot u_{t-1}]+ E[\phi(L)y_t \cdot u_{t-1}]+E[u_t \cdot u_{t-1}] \neq 0 $$ even if the $X$'s are independent of the error term, and even if the error term has no-autocorrelation : the term $E[\phi(L)y_t \cdot u_{t-1}]$ is not zero. But this proves that the Ljung-Box $Q$ statistic is not valid in an autoregressive model, because it cannot be said to have an asymptotic chi-square distribution under the null. Assume now that a weaker condition than strict exogeneity is satisfied, namely that $$E(u_t \mid \mathbf x_t, \mathbf x_{t-1},...,\phi(L)y_t, u_{t-1}, u_{t-2},...) = 0$$ The strength of this condition is "inbetween" strict exogeneity and orthogonality. Under the null of no autocorrelation of the error term, this condition is "automatically" satisfied by an autoregressive model, with respect to the lagged dependent variables (for the $X$'s it must be separately assumed of course). Then, there exists another statistic based on the residual sample autocorrelations, ( not the Ljung-Box one), that does have an asymptotic chi-square distribution under the null. This other statistic can be calculated, as a convenience, by using the "auxiliary regression" route: regress the residuals $\{\hat u_t\}$ on the full regressor matrix and on past residuals (up to the lag we have used in the specification), obtain the uncentered $R^2$ from this auxilliary regression and multiply it by the sample size. This statistic is used in what we call the "Breusch-Godfrey test for serial correlation" . It appears then that, when the regressors include lagged dependent variables (and so in all cases of autoregressive models also), the Ljung-Box test should be abandoned in favor of the Breusch-Godfrey LM test. 
Not because "it performs worse", but because it does not possess asymptotic justification. Quite an impressive result, especially judging from the ubiquitous presence and application of the former. UPDATE: Responding to doubts raised in the comments as to whether all the above apply also to "pure" time series models or not (i.e. without "$x$"-regressors), I have posted a detailed examination for the AR(1) model, in https://stats.stackexchange.com/a/205262/28746 . | {
"source": [
"https://stats.stackexchange.com/questions/148004",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/53690/"
]
} |
148,439 | In statistical inference , problem 9.6b, a "Highest Density Region (HDR)" is mentioned. However, I didn't find the definition of this term in the book. One similar term is the Highest Posterior Density (HPD). But it doesn't fit in this context, since 9.6b doesn't mention anything about a prior. And in the suggested solution , it only says that "obviously $c(y)$ is a HDR". Or is the HDR a region containing the mode(s) of a pdf? What is a Highest Density Region (HDR)? | I recommend Rob Hyndman's 1996 article "Computing and Graphing Highest Density Regions" in The American Statistician . Here is the definition of the HDR, taken from that article: Let $f(x)$ be the density function of a random variable $X$. Then the
$100(1-\alpha)\%$ HDR is the subset $R(f_\alpha)$ of the sample space
of $X$ such that
$$R(f_\alpha) = \{x\colon f(x)\geq f_\alpha\},$$
where $f_\alpha$ is the largest constant such that
$$P\big(X\in R(f_\alpha)\big)\geq 1-\alpha.$$ Figure 1 from that article illustrates the difference between the 75% HDR (so $\alpha=0.25$) and various other 75% Probability Regions for a mixture of two normals ($c_q$ is the $q$-th quantile, $\mu$ the mean and $\sigma$ the standard deviation of the density): The idea in one dimension is to take a horizontal line and shift it up (to $y=f_\alpha$) until the area above it and under the density is $1-\alpha$. Then the HDR $R_\alpha$ is the projection to the $x$ axis of this area. Of course, all this works with any density, whether Bayesian posterior or other. Here is a link to R code, which is the hdrcde package (and to the article on JSTOR). | {
"source": [
"https://stats.stackexchange.com/questions/148439",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/68407/"
]
} |
148,638 | I was reading the following link on non linear regression SAS Non Linear . My understanding from reading the first section "Nonlinear Regression vs. Linear Regression" was that the equation below is actually a linear regression, is that correct? If so why? $$y = b_1x^3 + b_2x^2 + b_3x + c$$ Am I also to understand that in non linear regression multicollinearity isn't an issue? I know that multicollinearity can be an issue in linear regression so surely if the model above is in fact a linear regression there would be multicollinearity? | There are (at least) three senses in which a regression can be considered "linear." To distinguish them, let's start with an extremely general regression model $$Y = f(X,\theta,\varepsilon).$$ To keep the discussion simple, take the independent variables $X$ to be fixed and accurately measured (rather than random variables). They model $n$ observations of $p$ attributes each, giving rise to the $n$-vector of responses $Y$. Conventionally, $X$ is represented as an $n\times p$ matrix and $Y$ as a column $n$-vector. The (finite $q$-vector) $\theta$ comprises the parameters . $\varepsilon$ is a vector-valued random variable. It usually has $n$ components, but sometimes has fewer. The function $f$ is vector-valued (with $n$ components to match $Y$) and is usually assumed continuous in its last two arguments ($\theta$ and $\varepsilon$). The archetypal example , of fitting a line to $(x,y)$ data, is the case where $X$ is a vector of numbers $(x_i,\,i=1,2,\ldots,n)$--the x-values; $Y$ is a parallel vector of $n$ numbers $(y_i)$; $\theta = (\alpha,\beta)$ gives the intercept $\alpha$ and slope $\beta$; and $\varepsilon = (\varepsilon_1,\varepsilon_2,\ldots,\varepsilon_n)$ is a vector of "random errors" whose components are independent (and usually assumed to have identical but unknown distributions of mean zero). In the preceding notation, $$y_i = \alpha + \beta x_i +\varepsilon_i = f(X,\theta,\varepsilon)_i$$ with $\theta = (\alpha,\beta)$. The regression function may be linear in any (or all) of its three arguments: "Linear regression, or a "linear model," ordinarily means that $f$ is linear as a function of the parameters $\theta$. The SAS meaning of "nonlinear regression" is in this sense, with the added assumption that $f$ is differentiable in its second argument (the parameters). This assumption makes it easier to find solutions. A "linear relationship between $X$ and $Y$" means $f$ is linear as a
function of $X$. A model has additive errors when $f$ is linear in $\varepsilon$.
In such cases it is always assumed that $\mathbb{E}(\varepsilon) =
0$. (Otherwise, it wouldn't be right to think of $\varepsilon$ as
"errors" or "deviations" from "correct" values.) Every possible combination of these characteristics can happen and is useful. Let's survey the possibilities. A linear model of a linear relationship with additive errors. This is ordinary (multiple) regression, already exhibited above and more generally written as $$Y = X\theta + \varepsilon.$$ $X$ has been augmented, if necessary, by adjoining a column of constants, and $\theta$ is a $p$-vector. A linear model of a nonlinear relationship with additive errors. This can be couched as a multiple regression by augmenting the columns of $X$ with nonlinear functions of $X$ itself. For instance, $$y_i = \alpha + \beta x_i^2 + \varepsilon$$ is of this form. It is linear in $\theta=(\alpha,\beta)$; it has additive errors; and it is linear in the values $(1,x_i^2)$ even though $x_i^2$ is a nonlinear function of $x_i$. A linear model of a linear relationship with nonadditive errors. An example is multiplicative error, $$y_i = (\alpha + \beta x_i)\varepsilon_i.$$ (In such cases the $\varepsilon_i$ can be interpreted as "multiplicative errors" when the location of $\varepsilon_i$ is $1$. However, the proper sense of location is not necessarily the expectation $\mathbb{E}(\varepsilon_i)$ anymore: it might be the median or the geometric mean, for instance. A similar comment about location assumptions applies, mutatis mutandis , in all other non-additive-error contexts too.) A linear model of a nonlinear relationship with nonadditive errors. E.g. , $$y_i = (\alpha + \beta x_i^2)\varepsilon_i.$$ A nonlinear model of a linear relationship with additive errors. A nonlinear model involves combinations of its parameters that not only are nonlinear, they cannot even be linearized by re-expressing the parameters. As a non-example, consider $$y_i = \alpha\beta + \beta^2 x_i + \varepsilon_i.$$ By defining $\alpha^\prime = \alpha\beta$ and $\beta^\prime=\beta^2$, and restricting $\beta^\prime \ge 0$, this model can be rewritten $$y_i = \alpha^\prime + \beta^\prime x_i + \varepsilon_i,$$ exhibiting it as a linear model (of a linear relationship with additive errors). As an example, consider $$y_i = \alpha + \alpha^2 x_i + \varepsilon_i.$$ It is impossible to find a new parameter $\alpha^\prime$, depending on $\alpha$, that will linearize this as a function of $\alpha^\prime$ (while keeping it linear in $x_i$ as well). A nonlinear model of a nonlinear relationship with additive errors. $$y_i = \alpha + \alpha^2 x_i^2 + \varepsilon_i.$$ A nonlinear model of a linear relationship with nonadditive errors. $$y_i = (\alpha + \alpha^2 x_i)\varepsilon_i.$$ A nonlinear model of a nonlinear relationship with nonadditive errors. $$y_i = (\alpha + \alpha^2 x_i^2)\varepsilon_i.$$ Although these exhibit eight distinct forms of regression, they do not constitute a classification system because some forms can be converted into others. A standard example is the conversion of a linear model with nonadditive errors (assumed to have positive support) $$y_i = (\alpha + \beta x_i)\varepsilon_i$$ into a linear model of a nonlinear relationship with additive errors via the logarithm,
$$\log(y_i) = \mu_i + \log(\alpha + \beta x_i) + (\log(\varepsilon_i) - \mu_i)$$ Here, the log geometric mean $\mu_i = \mathbb{E}\left(\log(\varepsilon_i)\right)$ has been removed from the error terms (to ensure they have zero means, as required) and incorporated into the other terms (where its value will need to be estimated). Indeed, one major reason to re-express the dependent variable $Y$ is to create a model with additive errors. Re-expression can also linearize $Y$ as a function of either (or both) of the parameters and explanatory variables. Collinearity Collinearity (of the column vectors in $X$) can be an issue in any form of regression. The key to understanding this is to recognize that collinearity leads to difficulties in estimating the parameters. Abstractly and quite generally, compare two models $Y = f(X,\theta,\varepsilon)$ and $Y=f(X^\prime,\theta,\varepsilon^\prime)$ where $X^\prime$ is $X$ with one column slightly changed. If this induces enormous changes in the estimates $\hat\theta$ and $\hat\theta^\prime$, then obviously we have a problem. One way in which this problem can arise is in a linear model, linear in $X$ (that is, types (1) or (5) above), where the components of $\theta$ are in one-to-one correspondence with the columns of $X$. When one column is a non-trivial linear combination of the others, the estimate of its corresponding parameter can be any real number at all. That is an extreme example of such sensitivity. From this point of view it should be clear that collinearity is a potential problem for linear models of nonlinear relationships (regardless of the additivity of the errors) and that this generalized concept of collinearity is potentially a problem in any regression model. When you have redundant variables, you will have problems identifying some parameters. | {
"source": [
"https://stats.stackexchange.com/questions/148638",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/41370/"
]
} |
148,803 | In linear regression, each predicted value is assumed to have been picked from a normal distribution of possible values. See below. But why is each predicted value assumed to have come from a normal distribution? How does linear regression use this assumption? What if possible values are not normally distributed? | Linear regression by itself does not need the normal (gaussian) assumption, the estimators can be calculated (by linear least squares) without any need of such assumption, and makes perfect sense without it. But then, as statisticians we want to understand some of the properties of this method, answers to questions such as: are the least squares estimators optimal in some sense? or can we do better with some alternative estimators? Then, under the normal distribution of error terms, we can show that this estimators are, indeed, optimal, for instance they are "unbiased of minimum variance", or maximum likelihood. No such thing can be proved without the normal assumption. Also, if we want to construct (and analyze properties of) confidence intervals or hypothesis tests, then we use the normal assumption. But, we could instead construct confidence intervals by some other means, such as bootstrapping. Then, we do not use the normal assumption, but, alas, without that, it could be we should use some other estimators than the least squares ones, maybe some robust estimators? In practice, of course, the normal distribution is at most a convenient fiction. So, the really important question is, how close to normality do we need to be to claim to use the results referred to above? That is a much trickier question! Optimality results are not robust , so even a very small deviation from normality might destroy optimality. That is an argument in favour of robust methods. For another tack at that question, see my answer to Why should we use t errors instead of normal errors? Another relevant question is Why is the normality of residuals "barely important at all" for the purpose of estimating the regression line? EDIT This answer led to a large discussion-in-comments, which again led to my new question: Linear regression: any non-normal distribution giving identity of OLS and MLE? which now finally got (three) answers, giving examples where non-normal distributions lead to least squares estimators. | {
"source": [
"https://stats.stackexchange.com/questions/148803",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/12492/"
]
} |
151,216 | Bottom line , the more I learn about statistics, the less I trust published papers in my field; I simply believe that researchers are not doing their statistics well enough. I'm a layman, so to speak. I'm trained in biology but I have no formal education in statistics or mathematics. I enjoy R and often make an effort to read (and understand...) some of the theoretical foundations of the methods that I apply when doing research. It wouldn't surprise me if the majority of people doing analyses today are actually not formally trained. I've published around 20 original papers, some of which have been accepted by recognized journals and statisticians have frequently been involved in the review-process. My analyses commonly include survival analysis, linear regression, logistic regression, mixed models. Never ever has a reviewer asked about model assumptions, fit or evaluation. Thus, I never really bothered too much about model assumptions, fit and evaluation. I start with a hypothesis, execute the regression and then present the results. In some instances I made an effort to evaluate these things, but I always ended up with " well it didn't fulfill all assumptions, but I trust the results ("subject matter knowledge") and they are plausible, so it's fine " and when consulting a statistician they always seemed to agree. Now, I've spoken to other statisticians and non-statisticians (chemists, physicians and biologists) who perform analyses themselves; it seems that people don't really bother too much about all these assumptions and formal evaluations. But here on CV, there is an abundance of people asking about residuals, model fit, ways to evaluate it, eigenvalues, vectors and the list goes on. Let me put it this way, when lme4 warns about large eigenvalues, I really doubt that many of its users care to address that... Is it worth the extra effort? Is it not likely that the majority of all published results do not respect these assumptions and perhaps have not even assessed them? This is probably a growing issue since databases grow larger every day and there is a notion that the bigger the data, the less important is the assumptions and evaluations. I could be absolutely wrong, but this is how I have perceived this. Update: Citation borrowed from StasK (below): http://www.nature.com/news/science-joins-push-to-screen-statistics-in-papers-1.15509 | I am trained as a statistician not as a biologist or medical doctor. But I do quite a bit of medical research (working with biologists and medical doctors), as part of my research I have learned quite a bit about treatment of several different diseases. Does this mean that if a friend asks me about a disease that I have researched that I can just write them a prescription for a medication that I know is commonly used for that particular disease? If I were to do this (I don't), then in many cases it would probably work out OK (since a medical doctor would just have prescribed the same medication), but there is always a possibility that they have an allergy/drug interaction/other that a doctor would know to ask about, that I do not and end up causing much more harm than good. If you are doing statistics without understanding what you are assuming and what could go wrong (or consulting with a statistician along the way that will look for these things) then you are practicing statistical malpractice. Most of the time it will probably be OK, but what about the occasion where an important assumption does not hold, but you just ignore it? 
I work with some doctors who are reasonably statistically competent and can do much of their own analysis, but they will still run it past me. Often I confirm that they did the correct thing and that they can do the analysis themselves (and they are generally grateful for the confirmation) but occasionally they will be doing something more complex and when I mention a better approach they will usually turn the analysis over to me or my team, or at least bring me in for a more active role. So my answer to your title question is "No", we are not exaggerating; rather we should be stressing some things more so that laymen will be more likely to at least double check their procedures/results with a statistician. Edit This is an addition based on Adam's comment below (will be a bit long for another comment). Adam, Thanks for your comment. The short answer is "I don't know". I think that progress is being made in improving the statistical quality of articles, but things have moved so quickly in many different ways that it will take a while to catch up and guarantee the quality. Part of the solution is focusing on the assumptions and the consequences of the violations in intro stats courses. This is more likely to happen when the classes are taught by statisticians, but needs to happen in all classes. Some journals are doing better, but I would like to see a specific statistician reviewer become the standard. There was an article a few years back (sorry don't have the reference handy, but it was in either JAMA or the New England Journal of Medicine) that showed a higher probability of being published (though not as big a difference as it should be) in JAMA or NEJM if a biostatistician or epidemiologist was one of the co-authors. An interesting article that came out recently is: http://www.nature.com/news/statistics-p-values-are-just-the-tip-of-the-iceberg-1.17412 which discusses some of the same issues. | {
"source": [
"https://stats.stackexchange.com/questions/151216",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/35413/"
]
} |
151,304 | Ridge regression coefficient estimate $\hat{\beta}^R$ are the values that minimize the $$ \text{RSS} + \lambda \sum_{j=1}^p\beta_j^2. $$ My questions are: If $\lambda = 0$, then we see that the expression above reduces to the usual RSS. What if $\lambda \to \infty$? I do not understand the textbook explanation of the behaviour of the coefficients. To aid in understanding the concept behind a particular term, why is the term called RIDGE Regression? (Why ridge?) And what could have been wrong with the usual/common regression that there is a need to introduce a new concept called ridge regression? Your insights would be great. | Since you ask for insights , I'm going to take a fairly intuitive approach rather than a more mathematical tack: Following the concepts in my answer here , we can formulate a ridge regression as a regression with dummy data by adding $p$ (in your formulation) observations, where $y_{n+j}=0$ , $x_{j,n+j}=\sqrt{\lambda}$ and $x_{i,n+j}=0$ for $i\neq j$ . If you write out the new RSS for this expanded data set, you'll see the additional observations each add a term of the form $(0-\sqrt{\lambda}\beta_j)^2=\lambda\beta_j^2$ , so the new RSS is the original $\text{RSS} + \lambda \sum_{j=1}^p\beta_j^2$ -- and minimizing the RSS on this new, expanded data set is the same as minimizing the ridge regression criterion. So what can we see here? As $\lambda$ increases, the additional $x$ -rows each have one component that increases, and so the influence of these points also increases. They pull the fitted hyperplane toward themselves. Then as $\lambda$ and the corresponding components of the $x$ 's go off to infinity, all the involved coefficients "flatten out" to $0$ . That is, as $\lambda\to\infty$ , the penalty will dominate the minimization, so the $\beta$ s will go to zero. If the intercept is not penalized (the usual case) then the model shrinks more and more toward the mean of the response. I'll give an intuitive sense of why we're talking about ridges first (which also suggests why it's needed), then tackle a little history. The first is adapted from my answer here : If there's multicollinearity, you get a "ridge" in the likelihood function (likelihood is a function of the $\beta$ 's). This in turn yields a long "valley" in the RSS (since RSS= $-2\log\mathcal{L}$ ). Ridge regression "fixes" the ridge - it adds a penalty that turns the ridge into a nice peak in likelihood space, equivalently a nice depression in the criterion we're minimizing: [ Clearer image ] The actual story behind the name is a little more complicated. In 1959 A.E. Hoerl [1] introduced ridge analysis for response surface methodology, and it very soon [2] became adapted to dealing with multicollinearity in regression ('ridge regression'). See for example, the discussion by R.W. Hoerl in [3], where it describes Hoerl's (A.E. not R.W.) use of contour plots of the response surface* in the identification of where to head to find local optima (where one 'heads up the ridge'). In ill-conditioned problems, the issue of a very long ridge arises, and insights and methodology from ridge analysis are adapted to the related issue with the likelihood/RSS in regression, producing ridge regression. * examples of response surface contour plots (in the case of quadratic response) can be seen here (Fig 3.9-3.12). 
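Not part of the original answer, but a quick numerical check of point 1 in R may help (simulated data, no intercept so that all $p$ coefficients are penalized; the seed, dimensions and value of $\lambda$ are arbitrary). Plain OLS on the augmented data reproduces the familiar closed-form ridge estimate $(X^\top X + \lambda I)^{-1} X^\top y$:
set.seed(1)
n <- 50; p <- 3; lambda <- 2
X <- matrix(rnorm(n * p), n, p)                                 # simulated predictors
y <- X %*% c(1, -2, 0.5) + rnorm(n)                             # simulated response
X_aug <- rbind(X, sqrt(lambda) * diag(p))                       # p dummy rows with sqrt(lambda) on the diagonal
y_aug <- c(y, rep(0, p))                                        # their responses are all 0
beta_aug <- solve(t(X_aug) %*% X_aug, t(X_aug) %*% y_aug)       # plain OLS on the augmented data
beta_ridge <- solve(t(X) %*% X + lambda * diag(p), t(X) %*% y)  # closed-form ridge estimate
all.equal(c(beta_aug), c(beta_ridge))                           # TRUE, up to numerical tolerance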
Returning to the name: "ridge" actually refers to the characteristics of the function we were attempting to optimize, rather than to adding a "ridge" (+ve diagonal) to the $X^TX$ matrix (so while ridge regression does add to the diagonal, that's not why we call it 'ridge' regression). For some additional information on the need for ridge regression, see the first link under list item 2 above. References: [1]: Hoerl, A.E. (1959). Optimum solution of many variables equations. Chemical Engineering Progress, 55 (11), 69-78. [2]: Hoerl, A.E. (1962). Applications of ridge analysis to regression problems. Chemical Engineering Progress, 58 (3), 54-59. [3]: Hoerl, R.W. (1985). Ridge Analysis 25 Years Later. American Statistician, 39 (3), 186-192. | {
"source": [
"https://stats.stackexchange.com/questions/151304",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/67413/"
]
} |
152,517 | I have been reading The Elements of Statistical Learning and I could not understand what Section 3.7 "Multiple outcome shrinkage and selection" is all about. It talks about RRR (reduced-rank regression), and I can only understand that the premise is about a generalized multivariate linear model where the coefficients are unknown (and is to be estimated) but is known not to have full rank. That's the only thing I understand. The rest of the mathematics is beyond me. It doesn't even help that the authors say 'one can show' and leaves things as an exercise. Can someone please help explain what is happening here, intuitively? Is this chapter supposedly discussing new methods? or what? | 1. What is reduced-rank regression (RRR)? Consider multivariate multiple linear regression, i.e. regression with $p$ independent variables and $q$ dependent variables. Let $\mathbf X$ and $\mathbf Y$ be centered predictor ($n \times p$) and response ($n\times q$) datasets. Then usual ordinary least squares (OLS) regression can be formulated as minimizing the following cost function: $$L=\|\mathbf Y-\mathbf X\mathbf B\|^2,$$ where $\mathbf B$ is a $p\times q$ matrix of regression weights. Its solution is given by $$\hat{\mathbf B}_\mathrm{OLS}=(\mathbf X^\top \mathbf X)^{-1}\mathbf X^\top \mathbf Y,$$ and it is easy to see that it is equivalent to doing $q$ separate OLS regressions, one for each dependent variable. Reduced-rank regression introduces a rank constraint on $\mathbf B$, namely $L$ should be minimized with $\operatorname{rank}(\mathbf B)\le r$, where $r$ is the maximal allowed rank of $\mathbf B$. 2. How to obtain the RRR solution? It turns out that RRR can be cast as an eigenvector problem. Indeed, using the fact that OLS is essentially orthogonal projection on the column space of $\mathbf X$, we can rewrite $L$ as $$L=\|\mathbf Y-\mathbf X\hat{\mathbf B}_\mathrm{OLS}\|^2+\|\mathbf X\hat{\mathbf B}_\mathrm{OLS}-\mathbf X\mathbf B\|^2.$$ The first term does not depend on $\mathbf B$ and the second term can be minimized by SVD/PCA of the fitted values $\hat{\mathbf Y}=\mathbf X\hat{\mathbf B}_\mathrm{OLS}$. Specifically, if $\mathbf U_r$ are first $r$ principal axes of $\hat{\mathbf Y}$, then $$\hat{\mathbf B}_\mathrm{RRR}=\hat{\mathbf B}_\mathrm{OLS}\mathbf U_r\mathbf U_r^\top.$$ 3. What is RRR good for? There can be two reasons to use RRR. First, one can use it for regularization purposes. Similarly to ridge regression (RR), lasso, etc., RRR introduces some "shrinkage" penalty on $\mathbf B$. The optimal rank $r$ can be found via cross-validation. In my experience, RRR easily outperforms OLS but tends to lose to RR. However, RRR+RR can perform (slightly) better than RR alone. Second, one can use it as a dimensionality reduction / data exploration method. If we have a bunch of predictor variables and a bunch of dependent variables, then RRR will construct "latent factors" in the predictor space that do the best job of explaining the variance of DVs. One can then try to interpret these latent factors, plot them, etc. As far as I know, this is routinely done in ecology where RRR is known as redundancy analysis and is an example of what they call ordination methods ( see @GavinSimpson's answer here ). 4. Relationship to other dimensionality reduction methods RRR is closely connected to other dimensionality reduction methods, such as CCA and PLS. 
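Before turning to those connections, a small R sketch may make the recipe from section 2 concrete (this is not part of the original answer; the simulated data, dimensions and rank are arbitrary). The rank-$r$ solution is just multivariate OLS followed by an SVD/PCA of the fitted values:
set.seed(1)
n <- 100; p <- 6; q <- 4; r <- 2
X <- scale(matrix(rnorm(n * p), n, p), scale = FALSE)           # centered predictors
Y <- scale(X %*% matrix(rnorm(p * q), p, q) +
           matrix(rnorm(n * q), n, q), scale = FALSE)           # centered responses
B_ols <- solve(t(X) %*% X, t(X) %*% Y)                          # ordinary multivariate OLS
U_r <- svd(X %*% B_ols)$v[, 1:r, drop = FALSE]                  # first r principal axes of the fitted values
B_rrr <- B_ols %*% U_r %*% t(U_r)                               # reduced-rank coefficient matrix
qr(B_rrr)$rank                                                  # r, as required
The fitted values $X \hat{\mathbf B}_\mathrm{RRR}$ are then exactly the rank-$r$ PCA truncation of the OLS fitted values.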
Returning to the connections with CCA and PLS: I covered them a little bit in my answer to What is the connection between partial least squares, reduced rank regression, and principal component regression? If $\mathbf X$ and $\mathbf Y$ are centered predictor ($n \times p$) and response ($n\times q$) datasets and if we look for the first pair of axes, $\mathbf w \in \mathbb R^p$ for $\mathbf X$ and $\mathbf v \in \mathbb R^q$ for $\mathbf Y$, then these methods maximize the following quantities: \begin{align}
\mathrm{PCA:}&\quad \operatorname{Var}(\mathbf{Xw}) \\
\mathrm{RRR:}&\quad \phantom{\operatorname{Var}(\mathbf {Xw})\cdot{}}\operatorname{Corr}^2(\mathbf{Xw},\mathbf {Yv})\cdot\operatorname{Var}(\mathbf{Yv}) \\
\mathrm{PLS:}&\quad \operatorname{Var}(\mathbf{Xw})\cdot\operatorname{Corr}^2(\mathbf{Xw},\mathbf {Yv})\cdot\operatorname{Var}(\mathbf {Yv}) = \operatorname{Cov}^2(\mathbf{Xw},\mathbf {Yv})\\
\mathrm{CCA:}&\quad \phantom{\operatorname{Var}(\mathbf {Xw})\cdot {}}\operatorname{Corr}^2(\mathbf {Xw},\mathbf {Yv})
\end{align} See there for some more details. See Torre, 2009, A Least-Squares Framework for Component Analysis for a detailed treatment of how most of the common linear multivariate methods (e.g. PCA, CCA, LDA, -- but not PLS!) can be seen as RRR. 5. Why is this section in Hastie et al. so confusing? Hastie et al. use the term RRR to refer to a slightly different thing! Instead of using the loss function $$L=\|\mathbf Y-\mathbf X \mathbf B\|^2,$$ they use $$L=\|(\mathbf Y-\mathbf X \mathbf B)(\mathbf Y^\top \mathbf Y)^{-1/2}\|^2,$$ as can be seen in their formula 3.68. This introduces a $\mathbf Y$-whitening factor into the loss function, essentially whitening the dependent variables. If you look at the comparison between CCA and RRR above, you will notice that if $\mathbf Y$ is whitened then the difference disappears. So what Hastie et al. call RRR is actually CCA in disguise (and indeed, see their 3.69). None of that is properly explained in this section, hence the confusion. See my answer to Friendly tutorial or introduction to reduced-rank regression for further reading. | {
"source": [
"https://stats.stackexchange.com/questions/152517",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/67413/"
]
} |
152,558 | The lm function in R can print out the estimated covariance of regression coefficients. What does this information give us? Can we now interpret the model better or diagnose issues that might be present in the model? | The most basic use of the covariance matrix is to obtain the standard errors of regression estimates. If the researcher is only interested in the standard errors of the individual regression parameters themselves, they can just take the square root of the diagonal to get the individual standard errors. However, often times you may be interested in a linear combination of regression parameters. For example, if you have a indicator variable for a given group, you may be interested in the group mean, which would be $\beta_0 + \beta_{\rm grp}$. Then, to find the standard error for that group's estimated mean, you would have $\sqrt{X^\top S X}$, where $X$ is a vector of your contrasts and $S$ is the covariance matrix. In our case, if we only have the addition covariate "grp", then $X = (1,1)$ ($1$ for the intercept, $1$ for belonging to the group). Furthermore, the covariance matrix (or more over, the correlation matrix, which is uniquely identified from the covariance matrix but not vice versa) can be very useful for certain model diagnostics. If two variables are highly correlated, one way to think about it is that the model is having trouble figuring out which variable is responsible for an effect (because they are so closely related). This can be helpful for a whole variety of cases, such as choosing subsets of covariates to use in a predictive model; if two variables are highly correlated, you may only want to use one of the two in your predictive model. | {
"source": [
"https://stats.stackexchange.com/questions/152558",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/77268/"
]
} |
152,674 | Gelman and Hill (2006) write on p46 that: The regression assumption that is generally least important is that
the errors are normally distributed. In fact, for the purpose of
estimating the regression line (as compared to predicting individual
data points), the assumption of normality is barely important at all.
Thus, in contrast to many regression textbooks, we do not recommend
diagnostics of the normality of regression residuals. Gelman and Hill don't seem to explain this point any further. Are Gelman and Hill correct? If so, then: Why "barely important at all"? Why is it neither important nor completely irrelevant? Why is the normality of residuals important when predicting individual data points? Gelman, A., & Hill, J. (2006). Data analysis using regression and multilevel/hierarchical models. Cambridge University Press | For estimation normality isn't exactly an assumption, but a major consideration would be efficiency; in many cases a good linear estimator will do fine and in that case (by Gauss-Markov) the LS estimate would be the best of those things-that-would-be-okay. (If your tails are quite heavy, or very light, it may make sense to consider something else) In the case of tests and CIs, while normality is assumed, it's usually not all that critical (again, as long as tails are not really heavy or light, or perhaps one of each), in that, at least in not-very-small samples the tests and typical CIs tend to have close to their nominal properties (not-too-far from claimed significance level or coverage) and perform well (reasonable power for typical situations or CIs not too much wider than alternatives) - as you move further from the normal case power can be more of an issue, and in that case large samples won't generally improve relative efficiency, so where effect sizes are such that power is middling in a test with relatively good power, it may be very poor for the tests which assume normality. This tendency to have close to the nominal properties for CIs and significance levels in tests is because of several factors operating together (one of which is the tendency of linear combinations of variables to have close to normal distribution as long as there's lots of values involved and none of them contribute a large fraction of the total variance). However, in the case of a prediction interval based on the normal assumption, normality is relatively more critical, since the width of the interval is strongly dependent on the distribution of a single value. However, even there, for the most common interval size (95% interval), the fact that many unimodal distributions have very close to 95% of their distribution within about 2sds of the mean tends to result in reasonable performance of a normal prediction interval even when the distribution isn't normal. [This doesn't carry over quite so well to much narrower or wider intervals -- say a 50% interval or a 99.9% interval -- though.] | {
"source": [
"https://stats.stackexchange.com/questions/152674",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/9162/"
]
} |
152,761 | Does this discrete distribution have a name? For $i \in 1...N$ $f(i) = \frac{1}{N} \sum_{j = i}^N \frac{1}{j}$ I came across this distribution from the following: I have a list of $N$ items ranked by some utility function. I want to randomly select one of the items, biasing toward the start of the list. So, I first choose an index $j$ between 1 and $N$ uniformly. I then select an item between indices 1 and $j$. I believe this process results in the above distribution. | You have a discretized version of the negative log distribution, that is, the distribution whose support is $[0, 1]$ and whose pdf is $f(t) = - \log t$. To see this, I'm going to redefine your random variable to take values in the set $\{ 0, 1/N, 2/N, \ldots, 1 \}$ instead of $\{0, 1, 2, \ldots, N \}$ and call the resulting distribution $T$. Then, my claim is that $$ Pr\left( T = \frac{t}{N} \right) \rightarrow - \frac{1}{N} \log \left( \frac{t}{N} \right) $$ as $N, t \rightarrow \infty$ while $\frac{t}{N}$ is held (approximately) constant. First, a little simulation experiment demonstrating this convergence. Here's a small implementation of a sampler from your distribution: t_sample <- function(N, size) {
  bounds <- sample(1:N, size=size, replace=TRUE)                # first draw the upper index j uniformly from 1..N
  samples <- sapply(bounds, function(t) {sample(1:t, size=1)})  # then draw the item uniformly from 1..j
  samples / N                                                   # rescale to take values in {1/N, ..., 1}
} Here's a histogram of a large sample taken from your distribution: ss <- t_sample(100, 200000)
hist(ss, freq=FALSE, breaks=50) and here's the logarithmic pdf overlaid: linsp <- 1:100 / 100
lines(linsp, -log(linsp)) To see why this convergence occurs, start with your expression $$ Pr \left( T = \frac{t}{N} \right) = \frac{1}{N} \sum_{j=t}^N \frac{1}{j} $$ and multiply and divide by $N$ $$ Pr \left( T = \frac{t}{N} \right) = \frac{1}{N} \sum_{j=t}^N \frac{N}{j} \frac{1}{N} $$ The summation is now a Riemann sum for the function $g(x) = \frac{1}{x}$, integrated from $\frac{t}{N}$ to $1$. That is, for large $N$, $$ Pr \left( T = \frac{t}{N} \right) \approx \frac{1}{N} \int_{\frac{t}{N}}^1 \frac{1}{x} dx = - \frac{1}{N} \log \left( \frac{t}{N} \right)$$ which is the expression I wanted to arrive at. | {
"source": [
"https://stats.stackexchange.com/questions/152761",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/77383/"
]
} |
152,882 | I'm trying to understand the bias-variance tradeoff, the relationship between the bias of the estimator and the bias of the model, and the relationship between the variance of the estimator and the variance of the model. I came to these conclusions: We tend to overfit the data when we neglect the bias of the estimator, that is when we only aim to minimize the bias of the model neglecting the variance of the model (in other words we only aim to minimize the variance of the estimator without considering the bias of the estimator too) Vice versa, we tend to underfit the data when we neglect the variance of the estimator, that is when we only aim to minimize the variance of the model neglecting the bias of the model (in other words we only aim to minimize the bias of the estimator without considering the variance of the estimator too). Are my conclusions correct? | Well, sort of. As stated, you ascribe intent to the scientist to minimize either bias or variance. In practice, you cannot explicitly observe the bias or the variance of your model (if you could, then you would know the true signal, in which case you wouldn't need a model). In general, you can only observe the error rate of your model on a specific data set, and you seek to estimate the out of sample error rate using various creative techniques. Now you do know that, theoretically at least, this error rate can be decomposed into bias and variance terms, but you cannot directly observe this balance in any specific concrete situation. So I'd restate your observations slightly as: A model is underfit to the data when the bias term contributes the majority of out of sample error. A model is overfit to the data when the variance term contributes the majority of out of sample error. In general, there is no real way to know for sure, as you can never truly observe the model bias. Nonetheless, there are various patterns of behavior that are indicative of being in one situation or another: Overfit models tend to have much worse goodness of fit performance on a testing dataset vs. a training data set. Underfit models tend to have the similar goodness of fit performance on a testing vs. training data set. These are the patterns that are manifest in the famous plots of error rates by model complexity, this one is from The Elements of Statistical Learning: Oftentimes these plots are overlaid with a bias and variance curve. I took this one from this nice exposition : But, it is very important to realize that you never actually get to see these additional curves in any realistic situation. | {
"source": [
"https://stats.stackexchange.com/questions/152882",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/66293/"
]
} |
152,897 | Many machine learning classifiers (e.g. support vector machines) allow one to specify a kernel. What would be an intuitive way of explaining what a kernel is? One aspect I have been thinking of is the distinction between linear and non-linear kernels. In simple terms, I could speak of 'linear decision functions' an 'non-linear decision functions'. However, I am not sure if calling a kernel a 'decision function' is a good idea. Suggestions? | Kernel is a way of computing the dot product of two vectors $\mathbf x$ and $\mathbf y$ in some (possibly very high dimensional) feature space, which is why kernel functions are sometimes called "generalized dot product". Suppose we have a mapping $\varphi \, : \, \mathbb R^n \to \mathbb R^m$ that brings our vectors in $\mathbb R^n$ to some feature space $\mathbb R^m$. Then the dot product of $\mathbf x$ and $\mathbf y$ in this space is $\varphi(\mathbf x)^T \varphi(\mathbf y)$. A kernel is a function $k$ that corresponds to this dot product, i.e. $k(\mathbf x, \mathbf y) = \varphi(\mathbf x)^T \varphi(\mathbf y)$. Why is this useful? Kernels give a way to compute dot products in some feature space without even knowing what this space is and what is $\varphi$. For example, consider a simple polynomial kernel $k(\mathbf x, \mathbf y) = (1 + \mathbf x^T \mathbf y)^2$ with $\mathbf x, \mathbf y \in \mathbb R^2$. This doesn't seem to correspond to any mapping function $\varphi$, it's just a function that returns a real number. Assuming that $\mathbf x = (x_1, x_2)$ and $\mathbf y = (y_1, y_2)$, let's expand this expression: $\begin{align}
k(\mathbf x, \mathbf y) & = (1 + \mathbf x^T \mathbf y)^2 = (1 + x_1 \, y_1 + x_2 \, y_2)^2 = \\
& = 1 + x_1^2 y_1^2 + x_2^2 y_2^2 + 2 x_1 y_1 + 2 x_2 y_2 + 2 x_1 x_2 y_1 y_2
\end{align}$ Note that this is nothing else but a dot product between two vectors $(1, x_1^2, x_2^2, \sqrt{2} x_1, \sqrt{2} x_2, \sqrt{2} x_1 x_2)$ and $(1, y_1^2, y_2^2, \sqrt{2} y_1, \sqrt{2} y_2, \sqrt{2} y_1 y_2)$, and $\varphi(\mathbf x) = \varphi(x_1, x_2) = (1, x_1^2, x_2^2, \sqrt{2} x_1, \sqrt{2} x_2, \sqrt{2} x_1 x_2)$. So the kernel $k(\mathbf x, \mathbf y) = (1 + \mathbf x^T \mathbf y)^2 = \varphi(\mathbf x)^T \varphi(\mathbf y)$ computes a dot product in 6-dimensional space without explicitly visiting this space. Another example is Gaussian kernel $k(\mathbf x, \mathbf y) = \exp\big(- \gamma \, \|\mathbf x - \mathbf y\|^2 \big)$. If we Taylor-expand this function, we'll see that it corresponds to an infinite-dimensional codomain of $\varphi$. Finally, I'd recommend an online course "Learning from Data" by Professor Yaser Abu-Mostafa as a good introduction to kernel-based methods. Specifically, lectures "Support Vector Machines" , "Kernel Methods" and "Radial Basis Functions" are about kernels. | {
"source": [
"https://stats.stackexchange.com/questions/152897",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/77476/"
]
} |
152,907 | In some lectures and tutorials I've seen, they suggest to split your data into three parts: training, validation and test. But it is not clear how the test dataset should be used, nor how this approach is better than cross-validation over the whole data set. Let's say we have saved 20% of our data as a test set. Then we take the rest, split it into k folds and, using cross-validation, we find the model that makes the best prediction on unknown data from this dataset. Let's say the best model we have found gives us 75% accuracy. Various tutorials and lots of questions on various Q&A websites say that now we can verify our model on a saved (test) dataset. But I still can't get how exactly is it done, nor what is the point of it. Let's say we've got an accuracy of 70% on the test dataset. So what do we do next? Do we try another model, and then another, until we will get a high score on our test dataset? But in this case it really looks like we will just find the model that fits our limited (only 20%) test set . It doesn't mean that we will find the model that is best in general. Moreover, how can we consider this score as a general evaluation of the model, if it is only calculated on a limited data set? If this score is low, maybe we were unlucky and selected "bad" test data. On the other hand, if we use all the data we have and then choose the model using k-fold cross-validation, we will find the model that makes the best prediction on unknown data from the entire data set we have. | This is similar to another question I answered regarding cross-validation and test sets . The key concept to understand here is independent datasets . Consider just two scenarios: If you have lot's of resources you would ideally collect one dataset and train your model via cross-validation. Then you would collect another completely independent dataset and test your model. However, as I said previously, this is usually not possible for many researchers. Now, if I am a researcher who isn't so fortunate what do I do? Well, you can try to mimic that exact scenario: Before you do any model training you would take a split of your data and leave it to the side ( never to be touched during cross-validation ). This is to simulate that very same independent dataset mentioned in the ideal scenario above. Even though it comes from the same dataset the model training won't take any information from those samples (where with cross-validation all the data is used). Once you have trained your model you would then apply it to your test set, again that was never seen during training, and get your results. This is done to make sure your model is more generalizable and hasn't just learned your data. To address your other concerns: Let's say we've got an accuracy of 70% on test data set, so what do we do next? Do we try an other model, and then an other, untill we will get hight score on our test data set? Sort of, the idea is that you are creating the best model you can from your data and then evaluating it on some more data it has never seen before. You can re-evaluate your cross-validation scheme but once you have a tuned model (i.e. hyper parameters) you are moving forward with that model because it was the best you could make. The key is to NEVER USE YOUR TEST DATA FOR TUNING . Your result from the test data is your model's performance on 'general' data. Replicating this process would remove the independence of the datasets (which was the entire point). 
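A bare-bones R sketch of that workflow (not part of the original answer; the data are simulated and a polynomial degree stands in for whatever hyperparameter you are actually tuning): hold out the test set first, tune by k-fold cross-validation on the training portion only, then score the final model once on the test set.
set.seed(42)
dat <- data.frame(x = runif(200, 0, 10))
dat$y <- sin(dat$x) + rnorm(200, sd = 0.3)
test_idx <- sample(nrow(dat), size = 0.2 * nrow(dat))   # 20% held out, never touched while tuning
test <- dat[test_idx, ]
train <- dat[-test_idx, ]
k <- 10
folds <- sample(rep(1:k, length.out = nrow(train)))     # random fold assignment
degrees <- 1:8                                          # candidate hyperparameter values
cv_mse <- sapply(degrees, function(d) {
  mean(sapply(1:k, function(f) {
    fit <- lm(y ~ poly(x, d), data = train[folds != f, ])
    mean((train$y[folds == f] - predict(fit, train[folds == f, ]))^2)
  }))
})
best_d <- degrees[which.min(cv_mse)]                    # chosen using cross-validation only
final_fit <- lm(y ~ poly(x, best_d), data = train)      # refit on the whole training set
mean((test$y - predict(final_fit, test))^2)             # test error, reported once and never used for tuning
In practice a package such as caret can handle this kind of bookkeeping for you, but the logic is the same.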
This is also addressed in another question on test/validation data . And also, how can we consider this score as a general evaluation of the model, if it is calculated on a limited data set? If this score is low, maybe we were unlucky to select "bad" test data. This is unlikely if you have split your data correctly. You should be splitting your data randomly (although potentially stratified for class balancing). If your dataset is large enough that you are splitting your data into three parts, your test subset should be large enough that the chance is very low that you just chose bad data. It is more likely that your model has been overfit. | {
"source": [
"https://stats.stackexchange.com/questions/152907",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/60313/"
]
} |
153,069 | A common approach to text classification is to train a classifier off of a 'bag-of-words'. The user takes the text to be classified and counts the frequencies of the words in each object, followed by some sort of trimming to keep the resulting matrix of a manageable size. Often, I see users construct their feature vector using TFIDF. In other words, the text frequencies noted above are down-weighted by the frequency of the words in corpus. I see why TFIDF would be useful for selecting the 'most distinguishing' words of a given document for, say, display to a human analyst. But in the case of text categorization using standard supervised ML techniques, why bother downweighting by the frequency of documents in the corpus? Will not the learner itself decide the importance to assign to each word/combination of words? I'd be grateful for your thoughts on what value the IDF adds, if any. | You're correct that the supervised learner can often be redundant with TF-IDF weighting. Here's the basic outline of why: In one typical form of TF-IDF weighting, the rescaling is logarithmic, so the weighting for a word $w$ in a document $d$ is $$
\text{TF-IDF}(w,d) = (\text{no. occurrences of $w$ in $d$}) \cdot f(w)
$$ for $N$ the number of documents in the corpus and $f(w)=\log\left(\frac{N}{\text{no. documents containing $w$}}\right)$ . When $f(w)>0$ , TF-IDF just amounts to a rescaling of the term frequency. So if we write the matrix counting the number of occurrences of a word in each document as $X$ , then a linear model has the form $X\beta$ . If we use TF-IDF instead of just term frequency alone, the linear model can be written as $X(k I)\tilde{\beta}$ , where $k$ is a vector storing all of our weights $k_i=f(w_i)$ . The effect of $kI$ is to rescale each column of $X$ . In this setting, the choice to use TF-IDF or TF alone is inconsequential, because you'll get the same predictions. Using the substitution $(kI)\tilde{\beta}=\beta$ , we can see the effect is to rescale $\beta$ . But there are at least two scenarios where the choice to use TF-IDF is consequential for supervised learning. The first case is when $f(w)=0$ . This happens whenever a term occurs in every document, such as very common words like "and" or "the." In this case, TF-IDF will zero out the column in $X(kI)$ , resulting in a matrix which is not full-rank. A rank-deficient matrix is often not preferred for supervised learning, so instead these words are simply dropped from $X$ because they add no information. In this way, TF-IDF provides automatic screening for the most common words. The second case is when the matrix $X(kI)$ has its document vectors rescaled to the same norm. Since a longer document is very likely to have a much larger vocabulary than a shorter document, it can be hard to compare documents of different lengths. Rescaling each document vector will also suppress importance rare words in the document independently of how rare or common the word is in the corpus. Moreover, rescaling each document's vector to have the same norm after computing TF-IDF gives a design matrix which is not a linear transformation of $X$ , so original matrix cannot be recovered using a linear scaling. Rescaling the document vectors has a close connection to cosine similarity, since both methods involve comparing unit-length vectors. The popularity of TF-IDF in some settings does not necessarily impose a limitation on the methods you use. Recently, it has become very common to use word and token vectors that are either pre-trained on a large corpus or trained by the researcher for their particular task. Depending on what you're doing and scale of the data, and the goal of your analysis, it might be more expedient to use TD-IDF, word2vec, or another method to represent natural language information. A number of resources can be found here , which I reproduce for convenience. K. Sparck Jones. "A statistical interpretation of term specificity
and its application in retrieval". Journal of Documentation, 28 (1). 1972.
G. Salton and Edward Fox and Harry Wu. "Extended Boolean information retrieval". Communications of the ACM, 26 (11). 1983.
G. Salton and M. J. McGill. "Introduction to modern information retrieval". 1983.
G. Salton and C. Buckley. "Term-weighting approaches in automatic text retrieval". Information Processing & Management, 24 (5). 1988.
H. Wu and R. Luk and K. Wong and K. Kwok. "Interpreting TF-IDF term weights as making relevance decisions". ACM Transactions on Information Systems, 26 (3). 2008. | {
"source": [
"https://stats.stackexchange.com/questions/153069",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/52971/"
]
} |
153,526 | I frequently see both the spellings "heteroskedastic" and "heteroscedastic", and similarly for "homoscedastic" and "homoskedastic". There seems to be no difference in meaning between the "c" and the "k" variants, simply an orthographic difference related to the Greek etymology of the word. What are the origins of the two distinct spellings? Is one usage more common than the other, and do they reflect variation between regions or research fields, or nothing more than authorial (or indeed, editorial) preference? As an aside, other languages have different policies on latinizing Greek roots to English: I note that in French it is, I think, always "hétéroscédasticité" whereas in German it is always "Heteroskedastizität". So I would not be surprised if authors with English as a second language may have a preference for the English spelling corresponding to their mother tongue's. Perhaps the real test is what Greek statisticians call it when writing in English! | Inside this small and vexed question even smaller questions are struggling to get out. The most detailed discussion to date appears to be Alfredo R. Paloyo. 2011. When did we begin to spell “heteros*edasticity” correctly? Ruhr Economic Papers 0300. see here (a reference I owe to @Andy here in Ten fold chat). I can't do justice to its dense and detailed discussion. What follows is more by nature of an executive summary, modulo a little whimsy. Modern search facilities make it possible to be confident that homoscedastic(ity) and heteroscedastic(ity) are modern coinages introduced, explicitly or implicitly, by the British statistician Karl Pearson in 1905. (Pearson ranged widely over several disciplines, but in the second half of his life his work was firmly centred on statistics.) Modifying c to k raises absolutely no statistical issue. The idea is at its simplest that the Greek root being used includes the letter kappa ($\kappa$), whose direct equivalent in English is k , and so that k is the correct spelling. However, as others have done elsewhere, we note that this suggestion was made particularly by J.H. McCulloch in the journal Econometrica , a journal which failed to follow the same logic by renaming itself Econometrika , nay Ekonometrika . (The roots behind "economics" are also Greek, including the word oikos . Ecologists will want to add that there is a journal Oikos even though, once again, ecology did not call itself oikology .) Further, it is remarkable that Karl Pearson was no hater of k , as he changed his own name from Carl to Karl and named his own journal Biometrika , in full and conscious recognition of the original Greek words he used when devising that name. The root question then is purely one of language, and of how faithful it is proper to be to the original words behind a coinage. If you follow up the McCulloch reference, the discussion turns to whether such words came into English directly or via other languages, and so hinges on criteria that may appear to many readers as arbitrary if not arcane. (Note that criteria is another word of Greek origin that escaped the k treatment.) Most language authorities now acknowledge that present spelling can owe much to historical accidents and that any long-established usage eventually can over-turn logic (or more precisely etymology). In total, there is plenty of scope here for scepticism (or skepticism). In terms of tribal or other preferences, it is my impression that Econometric usage seems to be shifting towards the k form. 
The McCulloch paper had an effect, indirectly if not directly. British English seems to make more use of c forms over k forms than does American English. The form sceptic is standard in British spelling, for example. All puns and wordplay here should be considered intentional even when accidental. | {
"source": [
"https://stats.stackexchange.com/questions/153526",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/22228/"
]
} |
153,531 | I'm using the Python Keras package for a neural network. This is the link . Is batch_size equal to the number of test samples? From Wikipedia we have this information: However, in other cases, evaluating the sum-gradient may require
expensive evaluations of the gradients from all summand functions.
When the training set is enormous and no simple formulas exist,
evaluating the sums of gradients becomes very expensive, because
evaluating the gradient requires evaluating all the summand functions'
gradients. To economize on the computational cost at every iteration,
stochastic gradient descent samples a subset of summand functions at
every step. This is very effective in the case of large-scale machine
learning problems. Above information is describing test data? Is this same as batch_size in keras (Number of samples per gradient update)? | The batch size defines the number of samples that will be propagated through the network. For instance, let's say you have 1050 training samples and you want to set up a batch_size equal to 100. The algorithm takes the first 100 samples (from 1st to 100th) from the training dataset and trains the network. Next, it takes the second 100 samples (from 101st to 200th) and trains the network again. We can keep doing this procedure until we have propagated all samples through of the network. Problem might happen with the last set of samples. In our example, we've used 1050 which is not divisible by 100 without remainder. The simplest solution is just to get the final 50 samples and train the network. Advantages of using a batch size < number of all samples: It requires less memory. Since you train the network using fewer samples, the overall training procedure requires less memory. That's especially important if you are not able to fit the whole dataset in your machine's memory. Typically networks train faster with mini-batches. That's because we update the weights after each propagation. In our example we've propagated 11 batches (10 of them had 100 samples and 1 had 50 samples) and after each of them we've updated our network's parameters. If we used all samples during propagation we would make only 1 update for the network's parameter. Disadvantages of using a batch size < number of all samples: The smaller the batch the less accurate the estimate of the gradient will be. In the figure below, you can see that the direction of the mini-batch gradient (green color) fluctuates much more in comparison to the direction of the full batch gradient (blue color). Stochastic is just a mini-batch with batch_size equal to 1. In that case, the gradient changes its direction even more often than a mini-batch gradient. | {
"source": [
"https://stats.stackexchange.com/questions/153531",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/43534/"
]
} |
153,599 | There are Recurrent Neural Networks and Recursive Neural Networks. Both are usually denoted by the same acronym: RNN. According to Wikipedia , Recurrent NN are in fact Recursive NN, but I don't really understand the explanation. Moreover, I don't seem to find which is better (with examples or so) for Natural Language Processing. The fact is that, although Socher uses Recursive NN for NLP in his tutorial , I can't find a good implementation of recursive neural networks, and when I search in Google, most of the answers are about Recurrent NN. Besides that, is there another DNN which applies better for NLP, or it depends on the NLP task? Deep Belief Nets or Stacked Autoencoders? (I don't seem to find any particular util for ConvNets in NLP, and most of the implementations are with machine vision in mind). Finally, I would really prefer DNN implementations for C++ (better yet if it has GPU support) or Scala (better if it has Spark support) rather than Python or Matlab/Octave. I've tried Deeplearning4j, but it's under constant development and the documentation is a little outdated and I can't seem to make it work. Too bad because it has the "black box" like way of doing things, very much like scikit-learn or Weka, which is what I really want. | Recurrent Neural networks are recurring over time. For example if you have a sequence x = ['h', 'e', 'l', 'l'] This sequence is fed to a single neuron which has a single connection to itself. At time step 0, the letter 'h' is given as input.At time step 1, 'e' is given as input. The network when unfolded over time will look like this. A recursive network is just a generalization of a recurrent network. In a recurrent network the weights are shared (and dimensionality remains constant) along the length of the sequence because how would you deal with position-dependent weights when you encounter a sequence at test-time of different length to any you saw at train-time. In a recursive network the weights are shared (and dimensionality remains constant) at every node for the same reason. This means that all the W_xh weights will be equal(shared) and so will be the W_hh weight. This is simply because it is a single neuron which has been unfolded in time. This is what a Recursive Neural Network looks like. It is quite simple to see why it is called a Recursive Neural Network. Each parent node's children are simply a node similar to that node. The Neural network you want to use depends on your usage. In Karpathy's blog , he is generating characters one at a time so a recurrent neural network is good. But if you want to generate a parse tree, then using a Recursive Neural Network is better because it helps to create better hierarchical representations. If you want to do deep learning in c++, then use CUDA. It has a nice user-base, and is fast. I do not know more about that so cannot comment more. In python, Theano is the best option because it provides automatic differentiation, which means that when you are forming big, awkward NNs, you don't have to find gradients by hand. Theano does it automatically for you. This feature is lacked by Torch7. Theano is very fast as it provides C wrappers to python code and can be implemented on GPUs. It also has an awesome user base, which is very important while learning something new. | {
"source": [
"https://stats.stackexchange.com/questions/153599",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/70539/"
]
} |
153,937 | Is it possible to find the p-value in pearson correlation in R? To find the pearson correlation, I usually do this col1 = c(1,2,3,4)
col2 = c(1,4,3,5)
cor(col1,col2)
# [1] 0.8315218 But how can I find the p-value of this? | You can use cor.test : col1 = c(1,2,3,4)
col2 = c(1,4,3,5)
cor.test(col1,col2) which gives : # Pearson's product-moment correlation
# data: col1 and col2
# t = 2.117, df = 2, p-value = 0.1685
# alternative hypothesis: true correlation is not equal to 0
# 95 percent confidence interval:
# -0.6451325 0.9963561
# sample estimates:
# cor
# 0.8315218 More information about the statistics and extra parameters at the official page: https://stat.ethz.ch/R-manual/R-patched/library/stats/html/cor.test.html | {
"source": [
"https://stats.stackexchange.com/questions/153937",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/73863/"
]
} |
154,485 | As a prequel to a question about linear-mixed models in R, and to share as a reference for beginner/intermediate statistics aficionados, I decided to post as an independent "Q&A-style" the steps involved in the "manual" computation of the coefficients and predicted values of a simple linear regression. The example is with the R in-built dataset, mtcars , and would be set up as miles per gallon consumed by a vehicle acting as the independent variable, regressed over the weight of the car (continuous variable), and the number of cylinders as a factor with three levels (4, 6 or 8) without interactions. EDIT: If you are interested in this question, you will definitely find a detailed and satisfactory answer in this post by Matthew Drury outside CV . | Note : I've posted an expanded version of this answer on my website . Would you kindly consider posting a similar answer with the actual R engine exposed? Sure! Down the rabbit hole we go. The first layer is lm , the interface exposed to the R programmer. You can look at the source for this by just typing lm at the R console. The majority of it (like the majority of most production level code) is busy checking of inputs, setting of object attributes, and throwing of errors; but this line sticks out lm.fit(x, y, offset = offset, singular.ok = singular.ok,
...) lm.fit is another R function, you can call it yourself. While lm conveniently works with formulas and data frame, lm.fit wants matrices, so that's one level of abstraction removed. Checking the source for lm.fit , more busywork, and the following really interesting line z <- .Call(C_Cdqrls, x, y, tol, FALSE) Now we are getting somewhere. .Call is R's way of calling into C code. There is a C function, C_Cdqrls in the R source somewhere, and we need to find it. Here it is . Looking at the C function, again, we find mostly bounds checking, error cleanup, and busy work. But this line is different F77_CALL(dqrls)(REAL(qr), &n, &p, REAL(y), &ny, &rtol,
REAL(coefficients), REAL(residuals), REAL(effects),
&rank, INTEGER(pivot), REAL(qraux), work); So now we are on our third language, R has called C which is calling into fortran. Here's the fortran code . The first comment tells it all c dqrfit is a subroutine to compute least squares solutions
c to the system
c
c (1) x * b = y (interestingly, looks like the name of this routine was changed at some point, but someone forgot to update the comment). So we're finally at the point where we can do some linear algebra, and actually solve the system of equations. This is the sort of thing that fortran is really good at, which explains why we passed through so many layers to get here. The comment also explains what the code is going to do c on return
c
c x contains the output array from dqrdc2.
c namely the qr decomposition of x stored in
c compact form. So fortran is going to solve the system by finding the $QR$ decomposition. The first thing that happens, and by far the most important, is call dqrdc2(x,n,n,p,tol,k,qraux,jpvt,work) This calls the fortran function dqrdc2 on our input matrix x . What's this? c dqrfit uses the linpack routines dqrdc and dqrsl. So we've finally made it to linpack . Linpack is a fortran linear algebra library that has been around since the 70s. Most serious linear algebra eventually finds its way to linpack. In our case, we are using the function dqrdc2 c dqrdc2 uses householder transformations to compute the qr
c factorization of an n by p matrix x. This is where the actual work is done. It would take a good full day for me to figure out what this code is doing, it is as low level as they come. But generically, we have a matrix $X$ and we want to factor it into a product $X = QR$ where $Q$ is an orthogonal matrix and $R$ is an upper triangular matrix. This is a smart thing to do, because once you have $Q$ and $R$ you can solve the linear equations for regression $$ X^t X \beta = X^t Y $$ very easily. Indeed $$ X^t X = R^t Q^t Q R = R^t R $$ so the whole system becomes $$ R^t R \beta = R^t Q^t y $$ but $R$ is upper triangular and has the same rank as $X^t X$, so as long as our problem is well posed, it is full rank, and we may as well just solve the reduced system $$ R \beta = Q^t y $$ But here's the awesome thing. $R$ is upper triangular, so the last linear equation here is just constant * beta_n = constant , so solving for $\beta_n$ is trivial. You can then go up the rows, one by one, and substitute in the $\beta$s you already know, each time getting a simple one variable linear equation to solve. So, once you have $Q$ and $R$, the whole thing collapses to what is called backwards substitution , which is easy. You can read about this in more detail here , where an explicit small example is fully worked out. | {
"source": [
"https://stats.stackexchange.com/questions/154485",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/67822/"
]
} |
154,798 | What is the difference between the terms "kernel" and "filter" in the context of convolutional neural networks? | In the context of convolutional neural networks, kernel = filter = feature detector. Here is a great illustration from Stanford's deep learning tutorial (also nicely explained by Denny Britz ). The filter is the yellow sliding window, and its value is: \begin{bmatrix}
1 & 0 & 1 \\
0 & 1 & 0 \\
1 & 0 & 1
\end{bmatrix} | {
"source": [
"https://stats.stackexchange.com/questions/154798",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/28527/"
]
} |
154,830 | I'm doing nested cross-validation. I have read that leave-one-out cross-validation can be biased (don't remember why). Is it better to use 10-fold cross-validation or leave-one-out cross-validation apart from the longer runtime for leave-one-out cross-validation? | Just to add slightly to the answer of @SubravetiSuraj (+1) Cross-validation gives a pessimistically biased estimate of performance because most statistical models will improve if the training set is made larger. This means that k-fold cross-validation estimates the performance of a model trained on a dataset $100\times\frac{(k-1)}{k}\%$ of the available data, rather than on 100% of it. So if you perform cross-validation to estimate performance, and then use a model trained on all of the data for operational use, it will perform slightly better than the cross-validation estimate suggests. Leave-one-out cross-validation is approximately unbiased , because the difference in size between the training set used in each fold and the entire dataset is only a single pattern. There is a paper on this by Luntz and Brailovsky (in Russian). Luntz, Aleksandr, and Viktor Brailovsky. "On estimation of characters obtained in statistical procedure of recognition." Technicheskaya Kibernetica 3.6 (1969): 6–12. see also Lachenbruch,Peter A., and Mickey, M. Ray. " Estimation of Error Rates in Discriminant Analysis ." Technometrics 10.1 (1968): 1–11. However, while leave-one-out cross-validation is approximately unbiased, it tends to have a high variance (so you would get very different estimates if you repeated the estimate with different initial samples of data from the same distribution). As the error of the estimator is a combination of bias and variance, whether leave-one-out cross-validation is better than 10-fold cross-validation depends on both quantities. Now the variance in fitting the model tends to be higher if it is fitted to a small dataset (as it is more sensitive to any noise/sampling artifacts in the particular training sample used). This means that 10-fold cross-validation is likely to have a high variance (as well as a higher bias) if you only have a limited amount of data, as the size of the training set will be smaller than for LOOCV. So k-fold cross-validation can have variance issues as well, but for a different reason. This is why LOOCV is often better when the size of the dataset is small. However, the main reason for using LOOCV in my opinion is that it is computationally inexpensive for some models (such as linear regression, most kernel methods, nearest-neighbour classifiers, etc.), and unless the dataset were very small, I would use 10-fold cross-validation if it fitted in my computational budget, or better still, bootstrap estimation and bagging. | {
"source": [
"https://stats.stackexchange.com/questions/154830",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/72747/"
]
} |
154,879 | What are common cost functions used in evaluating the performance of neural networks? Details (feel free to skip the rest of this question, my intent here is simply to provide clarification on notation that answers may use to help them be more understandable to the general reader) I think it would be useful to have a list of common cost functions, alongside a few ways that they have been used in practice. So if others are interested in this I think a community wiki is probably the best approach, or we can take it down if it's off topic. Notation So to start, I'd like to define a notation that we all use when describing these, so the answers fit well with each other. This notation is from Neilsen's book . A Feedforward Neural Network is a many layers of neurons connected together. Then it takes in an input, that input "trickles" through the network and then the neural network returns an output vector. More formally, call $a^i_j$ the activation (aka output) of the $j^{th}$ neuron in the $i^{th}$ layer, where $a^1_j$ is the $j^{th}$ element in the input vector. Then we can relate the next layer's input to it's previous via the following relation: $a^i_j = \sigma(\sum\limits_k (w^i_{jk} \cdot a^{i-1}_k) + b^i_j)$ where $\sigma$ is the activation function, $w^i_{jk}$ is the weight from the $k^{th}$ neuron in the $(i-1)^{th}$ layer to the $j^{th}$ neuron in the $i^{th}$ layer, $b^i_j$ is the bias of the $j^{th}$ neuron in the $i^{th}$ layer, and $a^i_j$ represents the activation value of the $j^{th}$ neuron in the $i^{th}$ layer. Sometimes we write $z^i_j$ to represent $\sum\limits_k (w^i_{jk} \cdot a^{i-1}_k) + b^i_j$ , in other words, the activation value of a neuron before applying the activation function. For more concise notation we can write $a^i = \sigma(w^i \times a^{i-1} + b^i)$ To use this formula to compute the output of a feedforward network for some input $I \in \mathbb{R}^n$ , set $a^1 = I$ , then compute $a^2$ , $a^3$ , ..., $a^m$ , where m is the number of layers. Introduction A cost function is a measure of "how good" a neural network did with respect to it's given training sample and the expected output. It also may depend on variables such as weights and biases. A cost function is a single value, not a vector, because it rates how good the neural network did as a whole. Specifically, a cost function is of the form $$C(W, B, S^r, E^r)$$ where $W$ is our neural network's weights, $B$ is our neural network's biases, $S^r$ is the input of a single training sample, and $E^r$ is the desired output of that training sample. Note this function can also potentially be dependent on $y^i_j$ and $z^i_j$ for any neuron $j$ in layer $i$ , because those values are dependent on $W$ , $B$ , and $S^r$ . In backpropagation, the cost function is used to compute the error of our output layer, $\delta^L$ , via $$\delta^L_j = \frac{\partial C}{\partial a^L_j} \sigma^{ \prime}(z^i_j)$$ . Which can also be written as a vector via $$\delta^L = \nabla_a C \odot \sigma^{ \prime}(z^i)$$ . We will provide the gradient of the cost functions in terms of the second equation, but if one wants to prove these results themselves, using the first equation is recommended because it's easier to work with. Cost function requirements To be used in backpropagation, a cost function must satisfy two properties: 1: The cost function $C$ must be able to be written as an average $$C=\frac{1}{n} \sum\limits_x C_x$$ over cost functions $C_x$ for individual training examples, $x$ . 
This is so it allows us to compute the gradient (with respect to weights and biases) for a single training example, and run Gradient Descent. 2: The cost function $C$ must not be dependent on any activation values of a neural network besides the output values $a^L$ . Technically a cost function can be dependent on any $a^i_j$ or $z^i_j$ . We just make this restriction so we can backpropagate, because the equation for finding the gradient of the last layer is the only one that is dependent on the cost function (the rest are dependent on the next layer). If the cost function is dependent on other activation layers besides the output one, backpropagation will be invalid because the idea of "trickling backwards" no longer works. Also, activation functions are required to have an output $0\leq a^L_j \leq 1$ for all $j$ . Thus these cost functions need only be defined within that range (for example, $\sqrt{a^L_j}$ is valid since we are guaranteed $a^L_j \geq 0$ ). | Here are those I understand so far. Most of these work best when given values between 0 and 1. Quadratic cost Also known as mean squared error , this is defined as: $$C_{MSE}(W, B, S^r, E^r) = 0.5\sum\limits_j (a^L_j - E^r_j)^2$$ The gradient of this cost function with respect to the output of a neural network and some sample $r$ is: $$\nabla_a C_{MSE} = (a^L - E^r)$$ Cross-entropy cost Also known as Bernoulli negative log-likelihood and Binary Cross-Entropy $$C_{CE}(W, B, S^r, E^r) = -\sum\limits_j [E^r_j \text{ ln } a^L_j + (1 - E^r_j) \text{ ln }(1-a^L_j)]$$ The gradient of this cost function with respect to the output of a neural network and some sample $r$ is: $$\nabla_a C_{CE} = \frac{(a^L - E^r)}{(1-a^L)(a^L)}$$ Exponential cost This requires choosing some parameter $\tau$ that you think will give you the behavior you want. Typically you'll just need to play with this until things work well. $$C_{EXP}(W, B, S^r, E^r) = \tau\text{ }\exp(\frac{1}{\tau} \sum\limits_j (a^L_j - E^r_j)^2)$$ where $\text{exp}(x)$ is simply shorthand for $e^x$ . The gradient of this cost function with respect to the output of a neural network and some sample $r$ is: $$\nabla_a C = \frac{2}{\tau}(a^L- E^r)C_{EXP}(W, B, S^r, E^r)$$ I could rewrite out $C_{EXP}$ , but that seems redundant. The point is that the gradient computes a vector and then multiplies it by $C_{EXP}$ . Hellinger distance $$C_{HD}(W, B, S^r, E^r) = \frac{1}{\sqrt{2}}\sum\limits_j(\sqrt{a^L_j}-\sqrt{E^r_j})^2$$ You can find more about this here . This needs to have positive values, and ideally values between $0$ and $1$ . The same is true for the following divergences. The gradient of this cost function with respect to the output of a neural network and some sample $r$ is: $$\nabla_a C = \frac{\sqrt{a^L}-\sqrt{E^r}}{\sqrt{2}\sqrt{a^L}}$$ Kullback–Leibler divergence Also known as Information Divergence , Information Gain , Relative entropy , KLIC , or KL Divergence (See here ). Kullback–Leibler divergence is typically denoted $$D_{\mathrm{KL}}(P\|Q) = \sum_i P(i) \, \ln\frac{P(i)}{Q(i)}$$ , where $D_{\mathrm{KL}}(P\|Q)$ is a measure of the information lost when $Q$ is used to approximate $P$ . Thus we want to set $P=E^r$ and $Q=a^L$ , because we want to measure how much information is lost when we use $a^L_j$ to approximate $E^r_j$ . This gives us $$C_{KL}(W, B, S^r, E^r)=\sum\limits_jE^r_j \log \frac{E^r_j}{a^L_j}$$ The other divergences here use this same idea of setting $P=E^r$ and $Q=a^L$ .
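As a quick, hypothetical illustration (the vectors a_L and E_r below are made up, and kept strictly between 0 and 1 because both costs involve logarithms), the KL cost can be evaluated directly in NumPy, with the cross-entropy cost alongside for comparison:
import numpy as np
a_L = np.array([0.7, 0.2, 0.1])   # hypothetical network output a^L
E_r = np.array([0.8, 0.1, 0.1])   # hypothetical desired output E^r
C_KL = np.sum(E_r * np.log(E_r / a_L))                           # KL cost as defined above
C_CE = -np.sum(E_r * np.log(a_L) + (1 - E_r) * np.log(1 - a_L))  # cross-entropy cost, for comparison
print(C_KL, C_CE)
Swapping in different strictly positive vectors shows how both costs grow as a_L moves away from E_r.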
The gradient of this cost function with respect to the output of a neural network and some sample $r$ is: $$\nabla_a C = -\frac{E^r}{a^L}$$ Generalized Kullback–Leibler divergence From here . $$C_{GKL}(W, B, S^r, E^r)=\sum\limits_j E^r_j \log \frac{E^r_j}{a^L_j} -\sum\limits_j(E^r_j) + \sum\limits_j(a^L_j)$$ The gradient of this cost function with respect to the output of a neural network and some sample $r$ is: $$\nabla_a C = \frac{a^L-E^r}{a^L}$$ Itakura–Saito distance Also from here . $$C_{IS}(W, B, S^r, E^r)= \sum_j \left(\frac {E^r_j}{a^L_j} - \log \frac{E^r_j}{a^L_j} - 1 \right)$$ The gradient of this cost function with respect to the output of a neural network and some sample $r$ is: $$\nabla_a C = \frac{a^L-E^r}{\left(a^L\right)^2}$$ where $\left(\left(a^L\right)^2\right)_j = a^L_j \cdot a^L_j$ ; in other words, $\left( a^L\right) ^2$ is simply the element-wise square of $a^L$ . | {
"source": [
"https://stats.stackexchange.com/questions/154879",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/78563/"
]
} |
154,951 | Mostly theoretical question. Are there any examples of non-normal distributions that has first four moment equal to those of normal? Could they exist in theory? | Yes, examples with skewness and excess kurtosis both zero are relatively easy to construct. (Indeed examples (a) to (d) below also have Pearson mean-median skewness 0) (a) For example, in this answer an example is given by taking a 50-50 mixture of a gamma variate, (which I call $X$ ), and the negative of a second one, which has a density that looks like this: Clearly the result is symmetric and not normal. The scale parameter is unimportant here, so we can make it 1. Careful choice of the shape parameter of the gamma yields the required kurtosis: The variance of this double-gamma ( $Y$ ) is easy to work out in terms of the gamma variate it's based on: $\text{Var}(Y)=E(X^2)=\text{Var}(X)+E(X)^2=\alpha+\alpha^2$ . The fourth central moment of the variable $Y$ is the same as $E(X^4)$ , which for a gamma( $\alpha$ ) is $\alpha(\alpha+1)(\alpha+2)(\alpha+3)$ As a result the kurtosis is $\frac{\alpha(\alpha+1)(\alpha+2)(\alpha+3)}{\alpha^2(\alpha+1)^2}=\frac{(\alpha+2)(\alpha+3)}{\alpha(\alpha+1)}$ . This is $3$ when $(\alpha+2)(\alpha+3)=3\alpha(\alpha+1)$ , which happens when $\alpha=(\sqrt{13}+1)/2\approx 2.303$ . (b) We could also create an example as a scale mixture of two uniforms. Let $U_1\sim U(-1,1)$ and let $U_2\sim U(-a,a)$ , and let $M=\frac12 U_1+\frac12 U_2$ . Clearly by considering that $M$ is symmetric and has finite range, we must have $E(M)=0$ ; the skewness will also be 0 and central moments and raw moments will be the same. $\text{Var}(M)=E(M^2)=\frac12\text{Var}(U1)+\frac12\text{Var}(U_2)=\frac16[1+a^2]$ . Similarly, $E(M^4)=\frac{1}{10} (1+a^4)$ and so
the kurtosis is $\frac{\frac{1}{10} (1+a^4)}{[\frac16 (1+a^2)]^2}=3.6\frac{1+a^4}{(1+a^2)^2}$ If we choose $a=\sqrt{5+\sqrt{24}}\approx 3.1463$ , then kurtosis is 3, and the density looks like this: (c) here's a fun example. Let $X_i\stackrel{_\text{iid}}{\sim}\text{Pois}(\lambda)$ , for $i=1,2$ . Let $Y$ be a 50-50 mixture of $\sqrt{X_1}$ and $-\sqrt{X_2}$ : by symmetry $E(Y)=0$ (we also need $E(|Y|)$ to be finite but given $E(X_1)$ is finite, we have that) $Var(Y)=E(Y^2)=E(X_1)=\lambda$ by symmetry (and the fact that the absolute 3rd moment exists) skew=0 4th moment: $E(Y^4) = E(X_1^2) = \lambda+\lambda^2$ kurtosis = $\frac{\lambda+\lambda^2}{\lambda^2}= 1+1/\lambda$ so when $\lambda=\frac12$ , kurtosis is 3. This is the case illustrated above. (d) all my examples so far have been symmetric, since symmetric answers are easier to create -- but asymmetric solutions are also possible. Here's a discrete example. (e) Now, here's an asymmetric continuous family . It will perhaps be the most surprising for some readers, so I'll describe it in detail. I'll begin by describing a discrete example and then build a continuous example from it (indeed I could have started with the one in (d), and it would have been simpler to play with, but I didn't, so we also have another discrete example for free). $\:\,$ (i) At $x=-2,1$ and $m=\frac12 (5+\sqrt{33})$ ( $\approx 5.3723$ ) place probabilities of $p_{-2}= \frac{1}{36}(7+\sqrt{33})$ , $p_1=\frac{1}{36}(17+\sqrt{33})$ , and $p_m=\frac{1}{36}(12-2\sqrt{33})$ (approximately 35.402%, 63.179% and 1.419%), respectively. This asymmetric three-point discrete distribution has zero skewness and zero excess kurtosis (as with all the above examples, it also has mean zero, which simplifies the calculations). $\:$ (ii) Now, let's make a continuous mixture. Centered at each of the ordinates above (-2,1,m), place a Gaussian kernel with common standard deviation $\sigma$ , and probability-weight given by the probabilities above (i.e. $w=(p_{-2},p_1,p_m)$ ). Phrased another way, take a mixture of three Gaussians with means at $-2,1$ and $m$ each with standard deviation $\sigma$ in the proportions $(p_{-2},p_1,p_m)$ respectively. For any choice of $\sigma$ the resulting continuous distribution has skewness 0 and excess kurtosis 0. Here's one example (here the common $\sigma$ for the normal components is 1.25): (The marks below the density show the locations of the centers of the Gaussian components.) As you see, none of these examples look particularly "normal". It would be a simple matter to make any number of discrete, continuous or mixed variables with the same properties. While most of my examples were constructed as mixtures, there's nothing special about mixtures, other than they're often a convenient way to make distributions with properties the way you want, a bit like building things with Lego. This answer gives some additional details on kurtosis that should make some of the considerations involved in constructing other examples a little clearer. You could match more moments in similar fashion, though it requires more effort to do so. However, because the MGF of the normal exists, you can't match all integer moments of a normal with some non-normal distribution, since that would mean their MGFs match, implying the second distribution was normal as well. | {
"source": [
"https://stats.stackexchange.com/questions/154951",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/76769/"
]
} |
155,580 | I'm a bit confused by a lecture on linear regression given by Andrew Ng on Coursera about machine learning. There, he gave a cost function that minimises the sum-of-squares as: $$ \frac{1}{2m} \sum _{i=1}^m \left(h_\theta(X^{(i)})-Y^{(i)}\right)^2 $$ I understand where the $\frac{1}{2}$ comes from. I think he did it so that when he took the derivative of the square term, the 2 in the square term would cancel with the half. But I don't understand where the $\frac{1}{m}$ comes from. Why do we need the $\frac{1}{m}$? In standard linear regression, we don't have it; we simply minimise the sum of squared residuals. Why do we need it here? | As you seem to realize, we certainly don't need the $1/m$ factor to get linear regression. The minimizers will of course be exactly the same, with or without it. One typical reason to normalize by $m$ is so that we can view the cost function as an approximation to the "generalization error", which is the expected square loss on a randomly chosen new example (not in the training set): Suppose $(X,Y),(X^{(1)},Y^{(1)}),\ldots,(X^{(m)},Y^{(m)})$ are sampled i.i.d. from some distribution. Then for large $m$ we expect that
$$
\frac{1}{m} \sum _{i=1}^m \left(h_\theta(X^{(i)})-Y^{(i)}\right)^2 \approx \mathbb{E}\left(h_\theta(X)-Y\right)^2.
$$ More precisely, by the Strong Law of Large Numbers, we have
$$
\lim_{m\to\infty} \frac{1}{m} \sum _{i=1}^m \left(h_\theta(X^{(i)})-Y^{(i)}\right)^2 = \mathbb{E}\left(h_\theta(X)-Y\right)^2
$$
with probability 1. Note: Each of the statements above is for any particular $\theta$, chosen without looking at the training set. For machine learning, we want these statements to hold for some $\hat{\theta}$ chosen based on its good performance on the training set. These claims can still hold in this case, though we need to make some assumptions on the set of functions $\{h_\theta \,|\, \theta \in \Theta\}$, and we'll need something stronger than the Law of Large Numbers. | {
"source": [
"https://stats.stackexchange.com/questions/155580",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/34623/"
]
} |
155,817 | Let's say I have three independent sources and each of them makes predictions for the weather tomorrow. The first one says that the probability of rain tomorrow is 0, then the second one says that the probability is 1, and finally the last one says that the probability is 50%. I would like to know the total probability given that information. If I apply the multiplication theorem for independent events I get 0, which doesn't seem correct. Why is it not possible to multiply all three if all sources are independent? Is there some Bayesian way to update the prior as I get new information? Note: This is not homework, it is something that I was thinking about. | You ask about three things: (a) how to combine several forecasts to get a single forecast, (b) if a Bayesian approach can be used here, and (c) how to deal with zero probabilities. Combining forecasts is a common practice . If you have several forecasts, then taking the average of those forecasts should give a combined forecast that is better in terms of accuracy than any of the individual forecasts. To average them you could use a weighted average where weights are based on inverse errors (i.e. precision), or information content . If you had knowledge of the reliability of each source you could assign weights that are proportional to the reliability of each source, so more reliable sources have a greater impact on the final combined forecast. In your case you do not have any knowledge about their reliability, so each of the forecasts has the same weight and so you can use the simple arithmetic mean of the three forecasts $$ 0\%\times.33+50\%\times.33+100\%\times.33 = (0\%+50\%+100\%)/3=50\% $$ As was suggested in comments by @AndyW and @ArthurB. , other methods besides the simple weighted mean are available. Many such methods are described in the literature on averaging expert forecasts, which I was not familiar with before, so thanks guys. In averaging expert forecasts we sometimes want to correct for the fact that experts tend to regress to the mean (Baron et al, 2013), or make their forecasts more extreme (Ariely et al, 2000; Erev et al, 1994). To achieve this one could use transformations of individual forecasts $p_i$, e.g. the logit function $$ \mathrm{logit}(p_i) = \log\left( \frac{p_i}{1-p_i} \right) \tag{1} $$ odds to the $a$-th power $$ g(p_i) = \left( \frac{p_i}{1-p_i} \right)^a \tag{2} $$ where $0 < a < 1$, or a more general transformation of the form $$ t(p_i) = \frac{p_i^a}{p_i^a + (1-p_i)^a} \tag{3} $$ where if $a=1$ no transformation is applied, if $a>1$ individual forecasts are made more extreme, and if $0 < a<1$ forecasts are made less extreme, as shown in the picture below (see Karmarkar, 1978; Baron et al, 2013). After such a transformation the forecasts are averaged (using the arithmetic mean, median, weighted mean, or another method). If equations (1) or (2) were used, the results need to be back-transformed using the inverse logit for (1) and inverse odds for (2). Alternatively, the geometric mean can be used (see Genest and Zidek, 1986; cf. Dietrich and List, 2014) $$ \hat p = \frac{ \prod_{i=1}^N p_i^{w_i} }{ \prod_{i=1}^N p_i^{w_i} + \prod_{i=1}^N (1 - p_i)^{w_i} } \tag{4}$$ or the approach proposed by Satopää et al (2014) $$ \hat p = \frac{ \left[ \prod_{i=1}^N \left(\frac{p_i}{1-p_i} \right)^{w_i} \right]^a }{ 1 + \left[ \prod_{i=1}^N \left(\frac{p_i}{1-p_i} \right)^{w_i} \right]^a } \tag{5}$$ where $w_i$ are weights. In most cases equal weights $w_i = 1/N$ are used unless a priori information that suggests another choice exists.
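To make this concrete, here is a minimal Python sketch (the variable names and the small eps used to pull 0 and 1 off the boundary are arbitrary illustrative choices) that applies the simple arithmetic mean and the geometric-mean pooling of equation (4) to the three forecasts from the question; nudging the extreme forecasts away from 0 and 1 anticipates the discussion of zero probabilities below.
import numpy as np
p = np.array([0.0, 1.0, 0.5])   # the three stated rain probabilities
w = np.ones_like(p) / len(p)    # equal weights, since nothing is known about reliability
simple_mean = np.sum(w * p)     # arithmetic average, 0.5, as computed above
eps = 1e-3                      # arbitrary nudge, since odds of exactly 0 or 1 are degenerate
p_adj = np.clip(p, eps, 1 - eps)
pooled = np.prod(p_adj ** w) / (np.prod(p_adj ** w) + np.prod((1 - p_adj) ** w))   # equation (4)
print(simple_mean, pooled)
Because the three forecasts are symmetric around 50%, both rules return 0.5 here; with asymmetric inputs the two aggregates would generally differ.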
Such methods are used in averaging expert forecasts so to correct for under- or overconfidence. In other cases you should consider if transforming forecasts to more, or less extreme is justified since it can make resulting aggregate estimate fall out of the boundaries marked by the lowest and the greatest individual forecast. If you have a priori knowledge about rain probability you can apply Bayes theorem to update the forecasts given the a priori probability of rain in similar fashion as described in here . There is also a simple approach that could be applied, i.e. calculate weighted average of your $p_i$ forecasts (as described above) where prior probability $\pi$ is treated as additional data point with some prespecified weight $w_{\pi}$ as in this IMDB example (see also source , or here and here for discussion; cf. Genest and Schervish, 1985), i.e. $$ \hat p = \frac{ \left(\sum_{i=1}^N p_i w_i \right) + \pi w_{\pi} }{ \left(\sum_{i=1}^N w_i \right) + w_{\pi} } \tag{6}$$ From your question however it does not follow that you have any a priori knowledge about your problem so you would probably use uniform prior, i.e. assume a priori $50\%$ chance of rain and this does not really change much in case of example that you provided. For dealing with zeros, there are several different approaches possible. First you should notice that $0\%$ chance of rain is not really reliable value, since it says that it is impossible that it will rain. Similar problems often occur in natural language processing when in your data you do not observe some values that possibly can occur (e.g. you count frequencies of letters and in your data some uncommon letter does not occur at all). In this case the classical estimator for probability, i.e. $$ p_i = \frac{n_i}{\sum_i n_i} $$ where $n_i$ is a number of occurrences of $i$th value (out of $d$ categories), gives you $p_i = 0$ if $n_i = 0$. This is called zero-frequency problem . For such values you know that their probability is nonzero (they exist!), so this estimate is obviously incorrect. There is also a practical concern: multiplying and dividing by zeros leads to zeros or undefined results, so zeros are problematic in dealing with. The easy and commonly applied fix is, to add some constant $\beta$ to your counts, so that $$ p_i = \frac{n_i + \beta}{(\sum_i n_i) + d\beta} $$ The common choice for $\beta$ is $1$, i.e. applying uniform prior based on Laplace's rule of succession , $1/2$ for Krichevsky-Trofimov estimate, or $1/d$ for Schurmann-Grassberger (1996) estimator. Notice however that what you do here is you apply out-of-data (prior) information in your model, so it gets subjective, Bayesian flavor. With using this approach you have to remember of assumptions you made and take them into consideration. The fact that we have strong a priori knowledge that there should not be any zero probabilities in our data directly justifies the Bayesian approach in here. In your case you do not have frequencies but probabilities, so you would be adding some very small value so to correct for zeros. Notice however that in some cases this approach may have bad consequences (e.g. when dealing with logs ) so it should be used with caution. Schurmann, T., and P. Grassberger. (1996). Entropy estimation of symbol sequences. Chaos, 6, 41-427. Ariely, D., Tung Au, W., Bender, R.H., Budescu, D.V., Dietz, C.B., Gu, H., Wallsten, T.S. and Zauberman, G. (2000). The effects of averaging subjective probability estimates between and within judges. 
Journal of Experimental Psychology: Applied, 6 (2), 130. Baron, J., Mellers, B.A., Tetlock, P.E., Stone, E. and Ungar, L.H. (2014). Two reasons to make aggregated probability forecasts more extreme. Decision Analysis, 11(2), 133-145. Erev, I., Wallsten, T.S., and Budescu, D.V. (1994). Simultaneous over-and underconfidence: The role of error in judgment processes. Psychological review, 101 (3), 519. Karmarkar, U.S. (1978). Subjectively weighted utility: A descriptive extension of the expected utility model. Organizational behavior and human performance, 21 (1), 61-72. Turner, B.M., Steyvers, M., Merkle, E.C., Budescu, D.V., and Wallsten, T.S. (2014). Forecast aggregation via recalibration. Machine learning, 95 (3), 261-289. Genest, C., and Zidek, J. V. (1986). Combining probability distributions: a
critique and an annotated bibliography. Statistical Science, 1 , 114–135. Satopää, V.A., Baron, J., Foster, D.P., Mellers, B.A., Tetlock, P.E., and Ungar, L.H. (2014). Combining multiple probability predictions using a simple logit model. International Journal of Forecasting, 30 (2), 344-356. Genest, C., and Schervish, M. J. (1985). Modeling expert judgments for Bayesian updating. The Annals of Statistics , 1198-1212. Dietrich, F., and List, C. (2014). Probabilistic Opinion Pooling. (Unpublished) | {
"source": [
"https://stats.stackexchange.com/questions/155817",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/79131/"
]
} |
156,471 | In ImageNet classification papers top-1 and top-5 error rates are important units for measuring the success of some solutions, but what are those error rates? In ImageNet Classification with Deep Convolutional
Neural Networks by Krizhevsky et al., every solution based on one single CNN (page 7) has no top-5 error rate while the ones with 5 and 7 CNNs do (and the error rate for 7 CNNs is better than for 5 CNNs). Does this mean the top-1 error rate is the best single error rate for one single CNN? Is the top-5 error rate simply the accumulated error rate of five CNNs? | [...] where the top-5 error rate is the fraction of test images for which
the correct label is not among the five labels considered most
probable by the model. First, you make a prediction using the CNN and obtain the predicted class multinomial distribution ( $\sum p_{class} = 1$ ). Now, in the case of the top-1 score, you check if the top class (the one with the highest probability) is the same as the target label. In the case of the top-5 score, you check if the target label is one of your top 5 predictions (the five with the highest probabilities). In both cases, the top score is computed as the number of times a predicted label matched the target label, divided by the number of data points evaluated. Finally, when 5-CNNs are used, you first average their predictions and follow the same procedure for calculating the top-1 and top-5 scores. | {
"source": [
"https://stats.stackexchange.com/questions/156471",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/78553/"
]
} |
156,778 | What is the difference between the three terms below? percentile quantile quartile | 0 quartile = 0 quantile = 0 percentile; 1 quartile = 0.25 quantile = 25 percentile; 2 quartile = 0.5 quantile = 50 percentile (median); 3 quartile = 0.75 quantile = 75 percentile; 4 quartile = 1 quantile = 100 percentile | {
"source": [
"https://stats.stackexchange.com/questions/156778",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/12492/"
]
} |
157,012 | Is Average Precision (AP) the Area under Precision-Recall Curve (AUC of PR-curve) ? EDIT: here is some comment about difference in PR AUC and AP. The AUC is obtained by trapezoidal interpolation of the precision. An
alternative and usually almost equivalent metric is the Average
Precision (AP), returned as info.ap. This is the average of the
precision obtained every time a new positive sample is recalled. It is
the same as the AUC if precision is interpolated by constant segments
and is the definition used by TREC most often. http://www.vlfeat.org/overview/plots-rank.html Moreover, the auc and the average_precision_score results are not the same in scikit-learn. This is strange, because in the documentation we have: Compute average precision (AP) from prediction scores This score
corresponds to the area under the precision-recall curve. here is the code: # Compute Precision-Recall and plot curve
from sklearn.metrics import precision_recall_curve, auc, average_precision_score  # imports needed for this snippet
precision, recall, thresholds = precision_recall_curve(y_test, clf.predict_proba(X_test)[:,1])
area = auc(recall, precision)
print "Area Under PR Curve(AP): %0.2f" % area #should be same as AP?
print 'AP', average_precision_score(y_test, y_pred, average='weighted')
print 'AP', average_precision_score(y_test, y_pred, average='macro')
print 'AP', average_precision_score(y_test, y_pred, average='micro')
print 'AP', average_precision_score(y_test, y_pred, average='samples') for my classifier I have something like: Area Under PR Curve(AP): 0.65
AP 0.676101781304
AP 0.676101781304
AP 0.676101781304
AP 0.676101781304 | Short answer is: YES. Average Precision is a single number used to summarise a Precision-Recall curve: it is the area under that curve, $\text{AP} = \int_0^1 p(r)\,dr$ , where $p(r)$ is the precision as a function of recall $r$ . You can approximate the integral (area under the curve) with the finite sum $\text{AP} \approx \sum_{k=1}^n P(k)\,\Delta r(k)$ , where $P(k)$ is the precision at the $k$-th threshold and $\Delta r(k)$ is the change in recall from threshold $k-1$ to $k$ . Please take a look at this link for a good explanation. | {
"source": [
"https://stats.stackexchange.com/questions/157012",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/16843/"
]
} |
157,860 | The variance is said to be a measure of spread. So, I had thought that the variance of 3,5 is equal to the variance of 3,3,5,5 since the numbers are equally spread. But this is not the case: the variance of 3,5 is 2 while the variance of 3,3,5,5 is 1 1/3 . This puzzles me, given the explanation that variance is supposed to be a measure of spread. So, in that context, what does measure of spread mean? | If you define variance as $s^2_{n}=$ $\,\text{MSE}\,$ $=\frac1n \sum_{i=1}^n (x_i-\bar{x})^2$ -- similar to the population variance but with the sample mean for $\mu$ -- then both your samples would have the same variance. So the difference is purely because of Bessel's correction in the usual formula for the sample variance, $s^2_{n-1}=\frac{n}{n-1}\cdot \text{MSE}=\frac{n}{n-1}\cdot \frac1n \sum_{i=1}^n (x_i-\bar{x})^2=\frac{1}{n-1}\sum_{i=1}^n (x_i-\bar{x})^2$ , which adjusts for the fact that the sample mean is closer to the data than the population mean is, in order to make it unbiased (taking the right value "on average"). The effect gradually goes away with increasing sample size, as $\frac{n-1}{n}$ goes to 1 as $n\to\infty$ . There's no particular reason you have to use the unbiased estimator for variance, by the way -- $s^2_n$ is a perfectly valid estimator, and in some cases may arguably have advantages over the more common form (unbiasedness isn't necessarily that big a deal). Variance itself isn't directly a measure of spread. If I double all the values in my data set, I contend they're twice as "spread". But variance increases by a factor of 4. So more usually, it is said that standard deviation, rather than variance, is a measure of spread. Of course, the same issue occurs with standard deviation (the usual $s_{n-1}$ version) as with variance -- when you double up the points the standard deviation changes, for the same reason as happens with the variance. In small samples the Bessel correction makes standard deviation somewhat less intuitive as a measure of spread because of that effect (that duplicating the sample changes the value). But many measures of spread do retain the same value when duplicating the sample; I'll mention a few -- $s_n$ (of course); the mean (absolute) deviation from the mean; the median (absolute) deviation from the median; the interquartile range (at least for some definitions of sample quartiles). | {
"source": [
"https://stats.stackexchange.com/questions/157860",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/52841/"
]
} |
158,583 | I do not understand exactly what is meant by node size. I know what a decision node is, but not what node size is. | A decision tree works by recursive partition of the training set. Every node $t$ of a decision tree is associated with a set of $n_t$ data points from the training set: You might find the parameter nodesize in some random forests packages, e.g. R : This is the minimum node size , in the example above the minimum node size is 10. This parameter implicitly sets the depth of your trees. nodesize from R random forest package Minimum size of terminal nodes. Setting this number larger causes smaller trees to be grown (and thus take less time). Note that the default values are different for classification (1) and regression (5). In other packages you directly find the parameter depth , e.g. WEKA : -depth from WEKA random forest package The maximum depth of the trees, 0 for unlimited.
(default 0) | {
"source": [
"https://stats.stackexchange.com/questions/158583",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/45129/"
]
} |
158,631 | If I understood correctly, in a machine learning algorithm, the model has to learn from its experience, i.e. when the model gives the wrong prediction for new cases, it must adapt to the new observations, and in time, the model becomes increasingly better.
I don't see that logistic regression has this characteristic. So why is it still regarded as a machine learning algorithm? What is the difference between logistic regression and ordinary regression in terms of "learning"? I have the same question for random forests! And what is the definition of "machine learning"? | Machine Learning is not a well defined term. In fact, if you Google "Machine Learning Definition" the first two things you get are quite different. From WhatIs.com , Machine learning is a type of artificial intelligence (AI) that
provides computers with the ability to learn without being explicitly
programmed. Machine learning focuses on the development of computer
programs that can teach themselves to grow and change when exposed to
new data. From Wikipedia , Machine learning explores the construction and study of algorithms
that can learn from and make predictions on data. Logistic regression undoubtedly fits the Wikipedia definition and you could argue whether or not it fits the WhatIs definition. I personally define Machine Learning just as Wikipedia does and consider it a subset of statistics. | {
"source": [
"https://stats.stackexchange.com/questions/158631",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/78313/"
]
} |
158,663 | If I have 39% of students at a school that exhibit a specific, objective, measurable behavior, can I extrapolate this and say that any student at that school has a 39% chance of exhibiting that behavior? | | {
"source": [
"https://stats.stackexchange.com/questions/158663",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/80696/"
]
} |
158,767 | I'm coming from Civil Engineering, in which we use Extreme Value Theory , like GEV distribution to predict the value of certain events, like The biggest wind speed , i.e the value that 98.5% of the wind speed would be lower to. My question is that why use such an extreme value distribution ? Wouldn't it be easier if we just used the overall distribution and get the value for the 98.5% probability ? | Disclaimer: At points in the following, this GROSSLY presumes that your data is normally distributed. If you are actually engineering anything then talk to a strong stats professional and let that person sign on the line saying what the level will be. Talk to five of them, or 25 of them. This answer is meant for a civil engineering student asking "why" not for an engineering professional asking "how". I think the question behind the question is "what is the extreme value distribution?". Yes it is some algebra - symbols. So what? right? Lets think about 1000 year floods. They are big. http://www.huffingtonpost.com/2013/09/20/1000-year-storm_n_3956897.html http://science.time.com/2013/09/17/the-science-behind-colorados-thousand-year-flood/ http://gizmodo.com/why-we-dont-design-our-cities-to-withstand-1-000-year-1325451888 When they happen, they are going to kill a lot of people. Lots of bridges are going down. You know what bridge isn't going down? I do. You don't ... yet. Question: Which bridge isn't going down in a 1000 year flood? Answer: The bridge designed to withstand it. The data you need to do it your way: So lets say you have 200 years of daily water data. Is the 1000 year flood in there? Not remotely. You have a sample of one tail of the distribution. You don't have the population. If you knew all of the history of floods then you would have the total population of data. Lets think about this. How many years of data do you need to have, how many samples, in order to have at least one value whose likelihood is 1 in 1000? In a perfect world, you would need at least 1000 samples. The real world is messy, so you need more. You start getting 50/50 odds at about 4000 samples. You start getting guaranteed to have more than 1 at around 20,000 samples. Sample doesn't mean "water one second vs. the next" but a measure for each unique source of variation - like year-to-year variation. One measure over one year, along with another measure over another year constitute two samples. If you don't have 4,000 years of good data then you likely don't have an example 1000 year flood in the data. The good thing is - you don't need that much data to get a good result. Here is how to get better results with less data: If you look at the annual maxima, you can fit the "extreme value distribution" to the 200 values of year-max-levels and you will have the distribution that contains the 1000 year flood-level. It will be the algebra, not the actual "how big is it". You can use the equation to determine how big the 1000 year flood will be. Then, given that volume of water - you can build your bridge to resist it. Don't shoot for the exact value, shoot for bigger, otherwise you are designing it to fail on the 1000 year flood. If you are bold, then you can use resampling to figure out how much beyond on the exact 1000 year value you need to build it to in order to have it resist. Here is why EV/GEV are the relevant analytic forms: The generalized extreme value distribution is about how much the max varies. The variation in the maximum behaves really different than variation in the mean. 
The normal distribution, via the central limit theorem, describes a lot of "central tendencies". Procedure: do the following 1000 times: i. pick 1000 numbers from the standard normal distribution ii. compute the max of that group of samples and store it now plot the distribution of the result #libraries
library(ggplot2)
#parameters and pre-declarations
nrolls <- 1000
ntimes <- 10000
store <- vector(length=ntimes)
#main loop
for (i in 1:ntimes){
#get samples
y <- rnorm(nrolls,mean=0,sd=1)
#store max
store[i] <- max(y)
}
#plot
ggplot(data=data.frame(store), aes(store)) +
geom_histogram(aes(y = ..density..),
col="red",
fill="green",
alpha = .2) +
geom_density(col=2) +
labs(title="Histogram for Max") +
labs(x="Max", y="Count") This is NOT the "standard normal distribution": The peak is at 3.2 but the max goes up toward 5.0. It has skew. It doesn't get below about 2.5. If you had actual data (the standard normal) and you just pick the tail, then you are uniformly randomly picking something along this curve. If you get lucky then you are toward the center and not the lower tail. Engineering is about the opposite of luck - it is about achieving consistently the desired results every time. " Random numbers are far too important to leave to chance " (see footnote), especially for an engineer. The analytic function family that best fits this data - the extreme value family of distributions. Sample fit: Let's say we have 200 random values of the year-maximum from the standard normal distribution, and we are going to pretend they are our 200 year history of maximum water levels (whatever that means). To get the distribution we would do the following: Sample the "store" variable (to make for short/easy code) fit to a generalized extreme value distribution find the mean of the distribution use bootstrapping to find the 95% CI upper limit in variation of the mean, so we can target our engineering for that. (code presumes the above have been run first) library(SpatialExtremes) #if it isn't here install it, it is the ev library
y2 <- sample(store,size=200,replace=FALSE) #this is our data
myfit <- gevmle(y2) This gives results: > gevmle(y2)
loc scale shape
3.0965530 0.2957722 -0.1139021 These can be plugged into the generating function to create 20,000 samples y3 <- rgev(20000,loc=myfit[1],scale=myfit[2],shape=myfit[3]) Building to the following will give 50/50 odds of failing on any year: mean(y3) 3.23681 Here is the code to determine what the 1000 year "flood" level is: p1000 <- qgev(1-(1/1000),loc=myfit[1],scale=myfit[2],shape=myfit[3])
p1000 Building to this following should give you 50/50 odds of failing on the 1000 year flood. p1000 4.510931 To determine the 95% upper CI I used the following code: myloc <- 3.0965530
myscale <- 0.2957722
myshape <- -0.1139021
N <- 1000
m <- 200
p_1000 <- vector(length=N)
yd <- vector(length=m)
for (i in 1:N){
#generate samples
yd <- rgev(m,loc=myloc,scale=myscale,shape=myshape)
#compute fit
fit_d <- gevmle(yd)
#compute quantile
p_1000[i] <- qgev(1-(1/1000),loc=fit_d[1],scale=fit_d[2],shape=fit_d[3])
}
mytarget <- quantile(p_1000,probs=0.95) The result was: > mytarget
95%
4.812148 This means, that in order to resist the large majority of 1000 year floods, given that your data is immaculately normal (not likely), you must build for the ... > out <- pgev(4.812148,loc=fit_d[1],scale=fit_d[2],shape=fit_d[3])
> 1/(1-out) or the > 1/(1-out)
shape
1077.829 ... 1078 year flood. Bottom lines: you have a sample of the data, not the actual total population. That
means your quantiles are estimates, and could be off. Distributions like the generalized extreme value distribution are
built to use the samples to determine the actual tails. They are
much less badly off at estimating than using the sample values, even
if you don't have enough samples for the classic approach. If you are robust the ceiling is high, but the result of that is -
you don't fail. Best of luck PS: I have heard that some civil engineering designs target the 98.5th percentile. If we had computed the 98.5th percentile instead of the max, then we would have found a different curve with different parameters. I think it is meant to build to a 67 year storm. $$ 1/(1-0.985) \approx 67 $$ The approach there, imo, would be to find the distribution for 67 year storms, then to determine variation around the mean, and get the padding so that it is engineered to succeed on the 67th year storm instead of to fail in it. Given the previous point, on average every 67 years the civil folks should have to rebuild. So at the full cost of engineering and construction every 67 years, given the operational life of the civil structure (I don't know what that is), at some point it might be less expensive to engineer for a longer inter-storm period. A sustainable civil infrastructure is one designed to last at least one human lifespan without failure, right? PS: more fun - a youtube video (not mine) https://www.youtube.com/watch?v=EACkiMRT0pc Footnote:
Coveyou, Robert R. "Random number generation is too important to be left to chance." Applied Probability and Monte Carlo Methods and modern aspects of dynamics. Studies in applied mathematics 3 (1969): 70-111. | {
"source": [
"https://stats.stackexchange.com/questions/158767",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/80763/"
]
} |
159,539 | Introductions to graphical models describe them as "... a marriage between graph theory and probability theory." I get the probability theory part but I have trouble understanding where exactly graph theory fits in. What insights from graph theory have helped deepen our understanding of probability distributions and decision making under uncertainty? I am looking for concrete examples, beyond the obvious use of graph theoretic terminology in PGMs, such as classifying a PGM as a "tree" or "bipartite" or "undirected", etc. | There is very little true mathematical graph theory in probabilistic graphical models, where by true mathematical graph theory I mean proofs about cliques, vertex orders, max-flow min-cut theorems, and so on. Even results as fundamental as Euler's Theorem and the Handshaking Lemma are not used, though I suppose one might invoke them to check some property of computer code used to update probabilistic estimates. Moreover, probabilistic graphical models rarely use more than a subset of the classes of graphs, such as multi-graphs. Theorems about flows in graphs are not used in probabilistic graphical models. If student A were an expert in probability but knew nothing about graph theory, and student B were an expert in graph theory but knew nothing about probability, then A would certainly learn and understand probabilistic graphical models faster than would B. | {
"source": [
"https://stats.stackexchange.com/questions/159539",
"https://stats.stackexchange.com",
"https://stats.stackexchange.com/users/54725/"
]
} |