1,801
Generate a random variable with a defined correlation to an existing variable(s)
Let $X$ be your fixed variable and suppose you want to generate a variable $Y$ that correlates with $X$ by amount $r$. If $X$ is standardized then (because $r$ is the beta coefficient in simple regression) $Y = rX + E$, where $E$ is a random variable from a normal distribution with mean $0$ and $\text{sd}=\sqrt{1-r^2}$. The observed correlation between the $X$ and $Y$ data will be approximately $r$; $X$ and $Y$ can be seen as random samples from a bivariate normal population (if $X$ is normal) with $\rho=r$.

Now, if you want the correlation in your bivariate sample to be exactly $r$, you need to ensure that $E$ has zero correlation with $X$. Tightening it to zero can be achieved by modifying $E$ iteratively. With only two variables, one given ($X$) and one to generate ($Y$), a single iteration is sufficient, but with multiple given variables ($X_1, X_2, X_3,\dots$) several iterations will be needed. It should be noted that if $X$ is normal then in the first procedure ("approximate $r$") $Y$ will also be normal; however, in the iterative fitting of $Y$ to the "exact $r$", $Y$ is likely to lose normality because the fitting exploits case values selectively.

Update Nov 11, 2017. I've come across this old thread today and decided to expand my answer by showing the algorithm of the iterative fitting I was speaking about initially.

Here is an iterative solution for training a randomly simulated or preexisting variable $Y$ to correlate (or covary) as precisely as we desire - or very close to it, depending on the number of iterations - with a set of given variables $X$s (these cannot be modified).

Disclaimer: I've found this iterative solution inferior to the excellent one based on finding the dual basis, proposed by @whuber in this thread today. @whuber's solution is not iterative and, more importantly for me, it seems to affect the values of the input "pig" variable somewhat less than "my" algorithm does (which would be an asset if the task is to "correct" an existing variable rather than to generate a random variate from scratch). Still, I'm publishing mine out of curiosity and because it works (see also the footnote).

So, we have given (fixed) variables $X_1, X_2,\dots,X_m$, and a variable $Y$ which is either a randomly generated "pig" of values or an existing data variable whose values we need to "correct" - to bring $Y$ exactly to the correlations (or covariances) $r_1, r_2,\dots,r_m$ with the $X$s. All data must be continuous; in other words, there should be a good deal of unique values.

The idea: perform iterative fitting of residuals. Knowing the wanted (target) correlations/covariances, we may compute predicted values for $Y$ using the $X$s as multiple linear predictors. After obtaining the initial residuals (from the current $Y$ and the ideal prediction), we train them iteratively not to correlate with the predictors. In the end, we regain $Y$ from the residuals. (The procedure was my own experimental reinvention of the wheel many years ago, when I knew none of the theory; I coded it then in SPSS.)

1. Convert the target $r$s to sums of crossproducts by multiplying them by $\text{df}=n-1$: $S_j=r_j \text{df}$. ($j$ is the $X$ variable index.)
2. Z-standardize all the variables (center each, then divide by the st. deviation computed on that above $\text{df}$). $Y$ and the $X$s are thus standard. Observed sums of squares are now $=\text{df}$.
3. Compute the regression coefficients predicting $Y$ by the $X$s according to the target $r$s: $\bf b=(X'X)^{-1} S$.
4. Compute predicted values for $Y$: $\hat{Y}=\bf Xb$.
5. Compute residuals $E=Y-\hat{Y}$.
6. Compute the needed (target) sum of squares for the residuals: $SS_S=\text{df}-SS_{\hat {Y}}$.
7. (Begin to iterate.) Compute observed sums of crossproducts between the current $E$ and every $X_j$: $C_j= \sum_{i=1}^n E_i X_{ij}$.
8. Correct the values of $E$ so as to bring all $C$s closer to $0$ ($i$ is a case index): $$E_i[\text{corrected}]=E_i-\frac{\sum_{j=1}^m C_j X_{ij}} {n\sum_{j=1}^m X_{ij}^2}$$ (the denominator doesn't change across iterations, so compute it in advance). Or, alternatively, a more efficient formula that additionally ensures the mean of $E$ becomes $0$: first, center $E$ at each iteration before computing the $C$s at step 7, then on this step 8 correct as $$E_i[\text{corrected}]=E_i-\frac{\sum_{j=1}^m \frac{C_j X_{ij}^3}{\sum_{i=1}^n X_{ij}^2}} {\sum_{j=1}^m X_{ij}^2}$$ (again, the denominators are known in advance).$^1$
9. Bring $SS_E$ to its target value: $E_i[\text{corrected}]=E_i \sqrt{SS_S/SS_E}$.
10. Go to step 7. (Do, say, 10-20 iterations; the greater $m$ is, the more iterations may be needed. If the target $r$s are realistic, $SS_S$ is positive, and the sample size $n$ isn't too small, the iterations always head toward convergence. End iterating.)
11. Ready: all the $C$s are almost zero now, which means the residuals $E$ have been trained to restore the target $r$s. Compute the fitted $Y$: $Y[\text{corrected}]=\hat{Y}+E$.
12. The obtained $Y$ is almost standardized. As a final touch, you may want to standardize it precisely, again as you did in step 2. You may supply $Y$ with any variance and mean you like. Actually, among the four statistics - min, max, mean, st. dev. - you may select any two values and linearly transform the variable so it possesses them without altering the $r$s (correlations) you've attained (this is all called linear rescaling).

To repeat the warning given above: with that pulling of $Y$ exactly to the $r$s, the output $Y$ does not have to be normally distributed.

$^1$ The correction formula can be further elaborated, for example to ensure greater homoscedasticity (in terms of sums of squares) of $Y$ with every $X$ as well, simultaneously with attaining the correlations - I've implemented code for that too. (I don't know whether such a "double" task is solvable via a neater, noniterative approach such as whuber's.)
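As a minimal sketch of the two-variable case described at the top (one fixed $X$, exact sample correlation $r$), here is some R of my own, not the author's SPSS implementation; the function name make_correlated is hypothetical:

    # One fixed variable X; generate Y whose sample correlation with X is exactly r.
    # (Sketch only: make_correlated() is my own name, not from the answer.)
    make_correlated <- function(X, r) {
      E <- rnorm(length(X))            # random "pig" for the residual part
      E <- residuals(lm(E ~ X))        # the single "iteration": make E exactly uncorrelated with X
      Xs <- as.numeric(scale(X))
      Es <- as.numeric(scale(E))
      r * Xs + sqrt(1 - r^2) * Es      # standardized Y with cor(X, Y) = r
    }
    set.seed(1)
    X <- rnorm(100)
    Y <- make_correlated(X, 0.6)
    cor(X, Y)                          # should be 0.6 up to floating point error

For several fixed $X$s one would follow the numbered recipe above, correcting $E$ against all the $X$s before rescaling it to $SS_S$.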
1,802
Generate a random variable with a defined correlation to an existing variable(s)
I felt like doing some programming, so I took @Adam's deleted answer and decided to write a nice implementation in R. I focus on using a functionally oriented style (i.e. lapply-style looping). The general idea is to take two vectors and randomly permute one of them until a certain correlation between them is reached. This approach is very brute-force, but it is simple to implement.

First we create a function that randomly permutes the input vector:

    randomly_permute = function(vec) vec[sample.int(length(vec))]
    randomly_permute(1:100)

(the output is the 100 integers 1 to 100 in a random order)

...and create some example data:

    vec1 = runif(100)
    vec2 = runif(100)

...write a function that permutes the input vector and correlates it to a reference vector:

    permute_and_correlate = function(vec, reference_vec) {
      perm_vec = randomly_permute(vec)
      cor_value = cor(perm_vec, reference_vec)
      return(list(vec = perm_vec, cor = cor_value))
    }
    permute_and_correlate(vec2, vec1)

(the output is a list with the 100 permuted values in $vec and, in this run, $cor = 0.1037542)

...and iterate a thousand times:

    n_iterations = lapply(1:1000, function(x) permute_and_correlate(vec2, vec1))

Note that R's scoping rules ensure that vec1 and vec2 are found in the global environment, outside the anonymous function used above. So the permutations are all relative to the original test datasets we generated.

Next, we find the maximum correlation:

    cor_values = sapply(n_iterations, '[[', 'cor')
    n_iterations[[which.max(cor_values)]]

(again a list with the permuted vector in $vec; the best correlation found here is $cor = 0.3166681)

...or find the closest value to a correlation of 0.2:

    n_iterations[[which.min(abs(cor_values - 0.2))]]

(this permutation achieves $cor = 0.2000199)

To get a higher correlation, you need to increase the number of iterations.
1,803
Generate a random variable with a defined correlation to an existing variable(s)
Let's solve a more general problem: given a variable $Y_1$, how do we generate random variables $Y_2,\dots,Y_n$ with correlation matrix $R$?

Solution:

- get the Cholesky decomposition of the correlation matrix, $CC^T=R$;
- create independent random vectors $X_2,\dots,X_n$ of the same length as $Y_1$;
- use $Y_1$ as the first column and append the generated randoms to it;
- compute $Y=CX$, where the $Y_i$ are the new correlated random numbers as required; note that $Y_1$ will not change (since $R_{11}=1$, the first row of the triangular factor is $(1,0,\dots,0)$, so $Y_1=X_1$).

Python code:

    import numpy as np
    import math
    from scipy.linalg import toeplitz, cholesky
    from statsmodels.stats.moment_helpers import cov2corr

    # create the large correlation matrix R
    p = 4
    h = 2/p
    v = np.linspace(1, -1+h, p)
    R = cov2corr(toeplitz(v))

    # create the first variable
    T = 1000
    y = np.random.randn(T)

    # generate p-1 correlated randoms
    X = np.random.randn(T, p)
    X[:, 0] = y
    C = cholesky(R)
    Y = np.matmul(X, C)

    # check that Y didn't change
    print(np.max(np.abs(Y[:, 0] - y)))

    # check the correlation matrix
    print(R)
    print(np.corrcoef(np.transpose(Y)))

Test output:

    0.0
    [[ 1.   0.5  0.  -0.5]
     [ 0.5  1.   0.5  0. ]
     [ 0.   0.5  1.   0.5]
     [-0.5  0.   0.5  1. ]]
    [[ 1.          0.50261766  0.02553882 -0.46259665]
     [ 0.50261766  1.          0.51162821  0.05748082]
     [ 0.02553882  0.51162821  1.          0.51403266]
     [-0.46259665  0.05748082  0.51403266  1.        ]]
1,804
Generate a random variable with a defined correlation to an existing variable(s)
An equivalent Python answer to @caracal's:

    import math
    import numpy as np
    import scipy.stats as ss
    from scipy import linalg

    n = 20                  # length of vector
    rho = 0.6               # desired correlation = cos(angle)
    theta = math.acos(rho)  # corresponding angle
    mu1 = 3; sigma1 = 0.5
    mu2 = 2; sigma2 = 0.2

    x1 = np.random.normal(mu1, sigma1, n)
    x2 = np.random.normal(mu2, sigma2, n)

    X = np.vstack((x1, x2)).T
    Xctr = ss.zscore(X)                      # standardized columns (mean 0)
    Id = np.diag(np.ones(n))                 # identity matrix
    Q = np.linalg.qr(Xctr)[0][:, 0]          # first column of Q from the QR decomposition
    P = Q.reshape(-1, 1) @ Q.reshape(1, -1)  # projection onto the space spanned by x1
    x2o = (Id - P) @ Xctr[:, 1]              # x2ctr made orthogonal to x1ctr
    Xc2 = np.vstack((Xctr[:, 0], x2o)).T     # bind into a matrix
    Y = Xc2 @ np.diag(1/np.sum(Xc2**2, axis=0)**0.5)  # scale columns to length 1

    x = Y[:, 1] + (1 / math.tan(theta)) * Y[:, 0]     # final new vector
    np.corrcoef((x1, x))[0, 1]               # check correlation = rho
1,805
Generate a random variable with a defined correlation to an existing variable(s)
Generate normal variables with a SAMPLING covariance matrix as given:

    covsam <- function(nobs, covm, seed=1237) {
      # nobs = number of observations, covm = given covariance matrix
      library(expm)
      set.seed(seed)                 # make the draw reproducible
      nvar <- ncol(covm)
      tot <- nvar*nobs
      dat <- matrix(rnorm(tot), ncol=nvar)
      covmat <- cov(dat)
      a2 <- sqrtm(solve(covmat))     # whitening: removes the sample covariance of dat
      m2 <- sqrtm(covm)
      dat2 <- dat %*% a2 %*% m2      # impose the target covariance exactly in the sample
      rc <- cov(dat2)                # returned value: sample covariance of dat2, for checking
    }
    cm <- matrix(c(1,0.5,0.1,0.5,1,0.5,0.1,0.5,1), ncol=3)
    cm
    res <- covsam(10, cm)
    res

Generate normal variables with a POPULATION covariance matrix as given:

    covpop <- function(nobs, covm, seed=1237) {
      # nobs = number of observations, covm = given covariance matrix
      library(expm)
      set.seed(seed)                 # make the draw reproducible
      nvar <- ncol(covm)
      tot <- nvar*nobs
      dat <- matrix(rnorm(tot), ncol=nvar)
      m2 <- sqrtm(covm)
      dat2 <- dat %*% m2             # population covariance is covm; the sample covariance only approximates it
      rc <- cov(dat2)                # returned value: sample covariance of dat2, for checking
    }
    cm <- matrix(c(1,0.5,0.1,0.5,1,0.5,0.1,0.5,1), ncol=3)
    cm
    res <- covpop(10, cm)
    res
1,806
Generate a random variable with a defined correlation to an existing variable(s)
Given: $\rho$ = the desired correlation between $Y$ and $Z$, and a sample $Z$.

Requested: a 'random' sample $Y$ such that $\operatorname{cor}(Y, Z) = \rho$.

Solution: let $x_1 =$ scale($Z$), which implies $E(x_1) = 0$, $Var(x_1) = 1$. Generate a random sample $x_2$ with $E(x_2) = 0$ and $Var(x_2) = 1$ (the distribution is not important). We then determine a scalar $a$ such that $Y = x_1 + a x_2$ satisfies the requirement. Let $cv = \operatorname{cov}(x_1, x_2)$. We then have:
\begin{eqnarray*} Var(Y) &=& Var(x_1 + a x_2) \\ &=& Var(x_1) + a^2 Var(x_2) + 2 a \operatorname{cov}(x_1, x_2) \\ &=& 1 + a^2 + 2 a \, cv \\ \operatorname{cor}(Y, Z) &=& \operatorname{cor}(Y, x_1) \\ &=& \operatorname{cov}(x_1 + a x_2, x_1) / \sqrt{Var(Y)\,Var(x_1)} \\ &=& [Var(x_1) + a \operatorname{cov}(x_2, x_1)] / \sqrt{1 + a^2 + 2 a \, cv}\\ &=& (1 + a \, cv) / \sqrt{1 + a^2 + 2 a \, cv} \end{eqnarray*}
So
\begin{eqnarray*} \rho \sqrt{1 + a^2 + 2 a \, cv} &=& 1 + a \, cv \\ \rho^2 (1 + a^2 + 2 a \, cv) &=& (1 + a \, cv)^2 \\ \rho^2 + \rho^2 a^2 + 2 \, cv \, \rho^2 a &=& 1 + 2 \, cv \, a + cv^2 a^2\\ a^2 (\rho^2 - cv^2) + 2 a \, cv (\rho^2 - 1) + \rho^2 - 1 &=& 0\\ a^2 (\rho^2 - cv^2) - 2 a \, cv (1 - \rho^2) - (1 - \rho^2) &=& 0\\ \end{eqnarray*}
We note from the expression for $\operatorname{cor}(Y, Z)$ above that $\operatorname{sign}(\rho) = \operatorname{sign}(1 + a \, cv)$. Solving the quadratic equation:
\begin{eqnarray*} \Delta &=& cv^2 (1 - \rho^2)^2 + (1 - \rho^2) (\rho^2 - cv^2) \\ &=& (1 - \rho^2) [cv^2 (1 - \rho^2) + \rho^2 - cv^2] \\ &=& (1 - \rho^2) \rho^2 (1 - cv^2) \\ a &=& \frac{cv (1 - \rho^2) \pm \sqrt \Delta}{\rho^2-cv^2} \\ \end{eqnarray*}
This gives an R function, about 2.5 times faster than the beautiful 'complement' function (the solution by whuber):

    corr <- function(z, rho) {
      x1 <- c(scale(z))
      x2 <- scale(rnorm(length(x1)))
      cv <- c(cov(x2, x1))
      sqrtdelta <- sqrt(rho^2 * (1 - rho^2) * (1 - cv^2))
      a <- (sqrtdelta + cv * (1 - rho^2)) / (rho^2 - cv^2)
      if (rho * (1 + a * cv) < 0) a <- (-sqrtdelta + cv * (1 - rho^2)) / (rho^2 - cv^2)
      a * x2 + x1
    }
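A quick sanity check of the function above (my own test snippet, not part of the original answer):

    set.seed(42)
    z <- rnorm(50)
    y <- corr(z, rho = 0.7)
    cor(z, y)   # should be 0.7 up to floating point error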
1,807
Subscript notation in expectations
In an expression where more than one random variable is involved, the symbol $E$ alone does not clarify with respect to which random variable the expected value is "taken". For example, is
$$E[h(X,Y)] \overset{?}{=} \int_{-\infty}^{\infty} h(x,y) f_X(x)\,dx$$
or
$$E[h(X,Y)] \overset{?}{=} \int_{-\infty}^\infty h(x,y) f_Y(y)\,dy\;?$$

Neither. When many random variables are involved, and there is no subscript on the $E$ symbol, the expected value is taken with respect to their joint distribution:
$$E[h(X,Y)] = \int_{-\infty}^\infty \int_{-\infty}^\infty h(x,y) f_{XY}(x,y) \, dx \, dy$$

When a subscript is present... in some cases it tells us on which variable we should condition. So
$$E_X[h(X,Y)] = E[h(X,Y)\mid X] = \int_{-\infty}^\infty h(x,y) f_{Y\mid X}(y\mid x)\,dy $$
Here, we "integrate out" the $Y$ variable, and we are left with a function of $X$.

...But in other cases, it tells us which marginal density to use for the "averaging":
$$E_X[h(X,Y)] = \int_{-\infty}^\infty h(x,y) f_{X}(x) \, dx $$
Here, we "average over" the $X$ variable, and we are left with a function of $Y$.

Rather confusing, I would say, but who said that scientific notation is totally free of ambiguity or multiple use? You should check how each author defines the use of such symbols.
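As a small numerical illustration of the distinction (my own sketch, not part of the answer; the choice $h(x,y)=xy$ with independent $X\sim N(1,1)$ and $Y\sim N(2,1)$ is just for concreteness), in R:

    set.seed(1)
    x <- rnorm(1e5, mean = 1)
    y <- rnorm(1e5, mean = 2)          # X and Y simulated independently here
    mean(x * y)                        # E[h(X,Y)] over the joint distribution: about 1*2 = 2
    g <- function(y0) mean(x * y0)     # "average over X only": leaves a function of y
    g(5)                               # about E[X]*5 = 5; still depends on y

The unsubscripted expectation is a single number, while averaging over $X$ alone leaves a function of $y$.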
1,808
Subscript notation in expectations
I just want to add a follow-up to Alecos' great answer. Sometimes the exact random variable (or set of random variables) the expectation is over doesn't matter. For instance,
$$ E_{X\sim P(X)} [X] = E_{X\sim P(X,Y)}[X]. $$
In your particular question, I suspect that because you are given that $h(X,Y)$ is linear in $X$ and $Y$, you will break it up into the "marginal" expectations $E_X[X]$ and $E_X[Y]$ (and then substitute $Y = X + 1$).
1,809
Are unbalanced datasets problematic, and (how) does oversampling (purport to) help?
I'd like to start by seconding a statement in the question:

... my point is that the questions on unbalanced datasets at CV do not mention such a tradeoff, but treat unbalanced classes as a self-evident evil, completely apart from any costs of sample collection.

I have the same concern; my questions here and here are intended to invite counter-evidence that it is a "self-evident evil", and the lack of answers (even with a bounty) suggests it isn't. A lot of blog posts and academic papers don't make this clear either. Classifiers can have a problem with imbalanced datasets, but only where the dataset is very small, so my answer is concerned with exceptional cases and does not justify resampling the dataset in general.

There is a class imbalance problem, but it is not caused by the imbalance per se; it arises because there are too few examples of the minority class to adequately describe its statistical distribution. As mentioned in the question, this means that the parameter estimates can have high variance, which is true, but that can give rise to a bias in favour of the majority class (rather than affecting both classes equally). In the case of logistic regression, this is discussed by King and Zeng [3]:

Gary King and Langche Zeng. 2001. "Logistic Regression in Rare Events Data." Political Analysis, 9, pp. 137-163. https://j.mp/2oSEnmf

[In my experiments I have found that sometimes there can be a bias in favour of the minority class, but that is caused by wild over-fitting where the class overlap disappears due to random sampling, so that doesn't really count, and (Bayesian) regularisation ought to fix it.]

The good thing is that MLE is asymptotically unbiased, so we can expect this bias against the minority class to go away as the overall size of the dataset increases, regardless of the imbalance. As this is an estimation problem, anything that makes estimation more difficult (e.g. high dimensionality) seems likely to make the class imbalance problem worse.

Note that probabilistic classifiers (such as logistic regression) and proper scoring rules will not solve this problem, as "popular statistical procedures, such as logistic regression, can sharply underestimate the probability of rare events" [3]. This means that your probability estimates will not be well calibrated, so you will have to do things like adjust the threshold (which is equivalent to re-sampling or re-weighting the data).

So if we look at a logistic regression model with 10,000 samples, we should not expect to see an imbalance problem, as adding more data tends to fix most estimation problems.

So an imbalance might be problematic if you have an extreme imbalance and the dataset is small (and/or high dimensional etc.), but in that case it may be difficult to do much about it (as you don't have enough data to estimate how big a correction to the sampling is needed to correct the bias). If you have lots of data, the only reason to resample is because operational class frequencies are different to those in the training set, or misclassification costs are different, etc. (if either are unknown or variable, you really ought to use a probabilistic classifier). This is mostly a stub; I hope to be able to add more to it later.
1,810
Are unbalanced datasets problematic, and (how) does oversampling (purport to) help?
I generally agree with your premise that there is an over-fixation on balancing classes, and that it is usually not necessary to do so. Your examples of when it is appropriate to do so are good ones. However, I disagree with your statement:

I conclude that unbalanced classes are not a problem, and that oversampling does not alleviate this non-problem, but gratuitously introduces bias and worse predictions.

The problem in your predictions is not the oversampling procedure; it is the failure to correct for the fact that the base rate for positives in the "over-sampled" (50/50) regression is 50%, while in the data it is closer to 2%.

Following King and Zeng ("Logistic Regression in Rare Events Data", 2001, Political Analysis, PDF here), let the population base rate be given by $\tau$. We estimate $\tau$ as the proportion of positives in the training sample:
$$ \tau = \frac{1}{N}\sum_{i=1}^N y_i $$
And let $\bar{y}$ be the proportion of positives in the over-sampled set, $\bar{y}=0.5$. This holds by construction, since you use a balanced 50/50 sample in the over-sampled regression.

Then, after using the predict command to generate predicted probabilities $P(y\mid x,d)$, we adjust these probabilities using the formula in King and Zeng, appendix B.2, to find the probability under the population base rate. Denote the adjusted probability by $\tilde P(y=1\mid x,d)$. In the case of two classes:
$$ \tilde P(y=1\mid x,d) = \frac{P(y=1\mid x,d) \frac{\tau}{\bar{y}}}{P(y=1\mid x,d) \frac{\tau}{\bar{y}} + P(y=0\mid x,d) \frac{1-\tau}{1-\bar{y}}} $$
Since $\bar{y}=0.5$ this simplifies to:
$$ \tilde P(y=1\mid x,d) = \frac{P(y=1\mid x,d) \, \tau}{P(y=1\mid x,d) \, \tau + P(y=0\mid x,d) \, (1-\tau)} $$

Modifying your code in the relevant places, we now have very similar Brier scores between the two approaches, despite the fact that the over-sampled training sample uses an order of magnitude less data than the raw training sample (in most cases, roughly 450 data points vs. 10,000).

So, in this Monte Carlo study, we see that balancing the training sample does not harm predictive accuracy (as judged by Brier score), but it also does not provide any meaningful increase in accuracy. The only benefit of balancing the training sample in this particular application is to reduce the computational burden of estimating the binary predictor. In the present case, we only need ~450 data points instead of 10,000. The reduction in computational burden would be much more substantial if we were dealing with millions of observations in the raw data.
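For reference, the two-class correction can be packaged as a small helper; this is my own sketch (the name adjust_to_base_rate is hypothetical, not from King and Zeng or the original code):

    # p    : predicted probability of the positive class from the model fit on the balanced sample
    # tau  : base rate of positives in the raw data (estimate of the population base rate)
    # ybar : base rate in the balanced training sample (0.5 for a 50/50 sample)
    adjust_to_base_rate <- function(p, tau, ybar = 0.5) {
      num <- p * tau / ybar
      num / (num + (1 - p) * (1 - tau) / (1 - ybar))
    }

With ybar = 0.5 this reduces to the simplified expression above, which is what the p1_tau1/p0_tau0 lines in the script below compute.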
The modified code is given below:

    library(randomForest)
    library(beanplot)

    nn_train <- nn_test <- 1e4
    n_sims <- 1e2

    true_coefficients <- c(-7, 5, rep(0, 9))

    incidence_train <- rep(NA, n_sims)
    model_logistic_coefficients <- model_logistic_oversampled_coefficients <-
      matrix(NA, nrow=n_sims, ncol=length(true_coefficients))
    brier_score_logistic <- brier_score_logistic_oversampled <-
      brier_score_randomForest <- brier_score_randomForest_oversampled <- rep(NA, n_sims)

    pb <- txtProgressBar(max=n_sims)
    for ( ii in 1:n_sims ) {
        setTxtProgressBar(pb, ii, paste(ii, "of", n_sims))
        set.seed(ii)
        while ( TRUE ) {   # make sure we even have the minority class
            predictors_train <- matrix(
                runif(nn_train*(length(true_coefficients) - 1)), nrow=nn_train)
            logit_train <- cbind(1, predictors_train) %*% true_coefficients
            probability_train <- 1/(1+exp(-logit_train))
            outcome_train <- factor(runif(nn_train) <= probability_train)
            if ( sum(incidence_train[ii] <- sum(outcome_train==TRUE)) > 0 ) break
        }
        dataset_train <- data.frame(outcome=outcome_train, predictors_train)

        index <- c(which(outcome_train==TRUE),
                   sample(which(outcome_train==FALSE), sum(outcome_train==TRUE)))

        model_logistic <- glm(outcome~., dataset_train, family="binomial")
        model_logistic_oversampled <- glm(outcome~., dataset_train[index, ], family="binomial")
        model_logistic_coefficients[ii, ] <- coefficients(model_logistic)
        model_logistic_oversampled_coefficients[ii, ] <- coefficients(model_logistic_oversampled)

        model_randomForest <- randomForest(outcome~., dataset_train)
        model_randomForest_oversampled <- randomForest(outcome~., dataset_train, subset=index)

        predictors_test <- matrix(runif(nn_test * (length(true_coefficients) - 1)), nrow=nn_test)
        logit_test <- cbind(1, predictors_test) %*% true_coefficients
        probability_test <- 1/(1+exp(-logit_test))
        outcome_test <- factor(runif(nn_test) <= probability_test)
        dataset_test <- data.frame(outcome=outcome_test, predictors_test)

        prediction_logistic <- predict(model_logistic, dataset_test, type="response")
        brier_score_logistic[ii] <- mean((prediction_logistic - (outcome_test==TRUE))^2)

        prediction_logistic_oversampled <- predict(model_logistic_oversampled, dataset_test, type="response")
        # Adjust probabilities based on appendix B.2 in King and Zeng (2001)
        p1_tau1 = prediction_logistic_oversampled*(incidence_train[ii]/nn_train)
        p0_tau0 = (1-prediction_logistic_oversampled)*(1-incidence_train[ii]/nn_train)
        prediction_logistic_oversampled_adj <- p1_tau1/(p1_tau1+p0_tau0)
        brier_score_logistic_oversampled[ii] <- mean((prediction_logistic_oversampled_adj - (outcome_test==TRUE))^2)

        prediction_randomForest <- predict(model_randomForest, dataset_test, type="prob")
        brier_score_randomForest[ii] <- mean((prediction_randomForest[,2]-(outcome_test==TRUE))^2)

        prediction_randomForest_oversampled <- predict(model_randomForest_oversampled, dataset_test, type="prob")
        # Adjust probabilities based on appendix B.2 in King and Zeng (2001)
        p1_tau1 = prediction_randomForest_oversampled*(incidence_train[ii]/nn_train)
        p0_tau0 = (1-prediction_randomForest_oversampled)*(1-incidence_train[ii]/nn_train)
        prediction_randomForest_oversampled_adj <- p1_tau1/(p1_tau1+p0_tau0)
        brier_score_randomForest_oversampled[ii] <- mean((prediction_randomForest_oversampled_adj[, 2] - (outcome_test==TRUE))^2)
    }
    close(pb)

    hist(incidence_train, breaks=seq(min(incidence_train)-.5, max(incidence_train) + .5),
         col="lightgray",
         main=paste("Minority class incidence out of", nn_train, "training samples"), xlab="")

    ylim <- range(c(model_logistic_coefficients, model_logistic_oversampled_coefficients))
    beanplot(data.frame(model_logistic_coefficients),
             what=c(0,1,0,0), col="lightgray", xaxt="n", ylim=ylim,
             main="Logistic regression: estimated coefficients")
    axis(1, at=seq_along(true_coefficients),
         c("Intercept", paste("Predictor", 1:(length(true_coefficients) - 1))), las=3)
    points(true_coefficients, pch=23, bg="red")
    beanplot(data.frame(model_logistic_oversampled_coefficients),
             what=c(0, 1, 0, 0), col="lightgray", xaxt="n", ylim=ylim,
             main="Logistic regression (oversampled): estimated coefficients")
    axis(1, at=seq_along(true_coefficients),
         c("Intercept", paste("Predictor", 1:(length(true_coefficients) - 1))), las=3)
    points(true_coefficients, pch=23, bg="red")
    beanplot(data.frame(Raw=brier_score_logistic, Oversampled=brier_score_logistic_oversampled),
             what=c(0,1,0,0), col="lightgray", main="Logistic regression: Brier scores")
    beanplot(data.frame(Raw=brier_score_randomForest, Oversampled=brier_score_randomForest_oversampled),
             what=c(0,1,0,0), col="lightgray", main="Random Forest: Brier scores")
1,811
Are unbalanced datasets problematic, and (how) does oversampling (purport to) help?
I fully agree with the accepted answer that there is no problem of imbalance per se - the real problem is lack of data: when the minority class does not form a Gaussian-like cluster of its own, it just looks like a set of outliers within the overall distribution, which is a problem for unsupervised methods. But in supervised learning, where the target classes and their characteristics (features) are known in advance, an imbalanced dataset need not be a problem if it is handled correctly - at a minimum, each cross-validation subsample (e.g. in ensemble bagging and boosting) should include representatives of both source classes so that the decision boundary can be approximated properly. The main problem with the minority class is that "the model cannot model the boundary of these low-density regions well during the learning, resulting in ambiguity and poor generalization". Proposed solutions include (1) semi-supervised learning, which generates pseudo-labels on unlabeled data and then trains on both together, or (2) when the imbalance is extreme, as in medical data, self-supervised pre-training followed by the main training. So the imbalance problem can be reduced through careful programming logic or, better, by increasing the number of samples - even without oversampling (the artificial injection of synthetic minority-class samples). P.S. Some hints: first, "Exactly like we should do feature selection inside the cross validation loop, we should also oversample inside the loop" (source); second, "Re-balancing makes sense only in the training set, so as to prevent the classifier from simply and naively classifying all instances as negative for a perceived accuracy of 99%"; third, "when comparing two binary classifiers, the AUC is one of the criteria that should not be fooled by the imbalancedness of the data"; fourth, as an alternative, a weighted cost/loss function - "Thanks to the Sklearn, there is a built-in parameter called class_weight in most of the ML algorithms which helps you to balance the contribution of each class" - e.g. a weighted sigmoid cross-entropy loss for a binary classifier; fifth, see the 8th point at the link - "be creative"; sixth, threshold moving, i.e. searching for the optimal cutoff over a grid (a small sketch of the fourth and sixth hints follows below).
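To make the last two hints concrete, here is a minimal R sketch of my own (simulated data, not code from the quoted sources): (a) class weighting via observation weights in a plain glm, and (b) threshold moving by searching a grid of cutoffs on a validation set.

set.seed(1)
n <- 5000
x <- runif(n)
y <- rbinom(n, 1, plogis(-5 + 3 * x))           # rare-ish positive class (roughly 5%)
train <- sample(n, n / 2)
valid <- setdiff(seq_len(n), train)

# (a) weight each observation inversely to its class frequency
#     (R warns about non-integer weights in a binomial glm; the fit is still usable)
w <- ifelse(y == 1, sum(y == 0) / sum(y == 1), 1)
fit_weighted <- glm(y ~ x, family = binomial, weights = w, subset = train)

# (b) fit without weights, then pick the cutoff maximizing balanced accuracy
fit <- glm(y ~ x, family = binomial, subset = train)
p_valid <- predict(fit, newdata = data.frame(x = x[valid]), type = "response")
cutoffs <- seq(0.01, 0.99, by = 0.01)
balanced_accuracy <- sapply(cutoffs, function(cut) {
  pred <- as.numeric(p_valid >= cut)
  sens <- mean(pred[y[valid] == 1] == 1)        # sensitivity
  spec <- mean(pred[y[valid] == 0] == 0)        # specificity
  (sens + spec) / 2
})
best_cutoff <- cutoffs[which.max(balanced_accuracy)]
best_cutoff                                      # typically well below 0.5 for a rare class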
1,812
Are unbalanced datasets problematic, and (how) does oversampling (purport to) help?
Edit to summarize the following arguments and simulations: I propose that balancing by either over-/undersampling or class weights is an advantage during the training of gradient descent models that use sampling procedures during training (i.e. subsampling, bootstrapping, minibatches, etc., as used in e.g. neural networks and gradient boosting). I propose that this is due to an improved signal-to-noise ratio of the gradient of the loss function, which is explained by: improved signal (a larger gradient of the loss function, as suggested by the first simulation) and reduced noise of the gradient due to sampling in a balanced vs. a strongly unbalanced setting (as supported by the second simulation). Original answer: To make my point I have modified your code to include a "0" (or baseline) model for each run, in which the first predictor column is removed, thus retaining only the remaining 9 predictors, which have no relationship to the outcome. At the end I calculate the Brier scores for the logistic and randomForest models and compare them with those of the full models. The full code is below. When I compare the change in Brier score from the "0" models to the full original models (which include predictor 1) I observe: > round( quantile( (brier_score_logistic - brier_score_logistic_0)/brier_score_logistic_0), 3) 0% 25% 50% 75% 100% -0.048 -0.038 -0.035 -0.032 -0.020 > round( quantile( (brier_score_logistic_oversampled - brier_score_logistic_oversampled_0)/brier_score_logistic_oversampled_0),3) 0% 25% 50% 75% 100% -0.323 -0.258 -0.241 -0.216 -0.130 > round( quantile( (brier_score_randomForest - brier_score_randomForest_0)/brier_score_randomForest_0), 3) 0% 25% 50% 75% 100% -0.050 -0.037 -0.032 -0.026 -0.009 > round( quantile( (brier_score_randomForest_oversampled - brier_score_randomForest_oversampled_0)/brier_score_randomForest_oversampled_0), 3) 0% 25% 50% 75% 100% -0.306 -0.272 -0.255 -0.233 -0.152 What seems clear is that, for the same predictor, the relative change in the Brier score jumps from a median of about 0.035 (in absolute value) in the imbalanced setting to about 0.241 in the balanced setting, giving a roughly 7x stronger "gradient" for a predictive model vs. a baseline. Additionally, when you look at the absolute Brier scores, the baseline model in the unbalanced setting performs much better than the full model in the balanced setting: > round( quantile(brier_score_logistic_0), 5) 0% 25% 50% 75% 100% 0.02050 0.02363 0.02450 0.02545 0.02753 > round( quantile(brier_score_logistic_oversampled), 5) 0% 25% 50% 75% 100% 0.17576 0.18842 0.19294 0.19916 0.23089 Thus, concluding that a smaller Brier score is better per se can lead to wrong conclusions if, say, you are comparing datasets with different predictor or outcome prevalences. Overall, there seem to me to be two advantages/problems: (1) Balancing the dataset appears to give you a larger gradient, which should be beneficial for training gradient descent algorithms (xgboost, neural networks). In my experience, without balancing, a neural network may just learn to guess the majority class without learning any data features if the dataset is too unbalanced. (2) Comparability between different studies/patient populations/biomarkers may benefit from measures that are less sensitive to changes in prevalence, such as the AUC or C-index, or perhaps a stratified Brier score, since the example shows that a strong imbalance diminishes the difference between a baseline model and a predictive model.
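One possible reading of the "stratified Brier score" mentioned above (this is my own sketch of the idea, not code from the simulation): average the squared errors within each class first, then combine the per-class means with equal weight, so that the score no longer depends on the prevalence.

# Per-class (prevalence-independent) Brier score: equal weight to each class.
# `prob` are predicted probabilities of the positive class, `outcome` is 0/1.
stratified_brier <- function(prob, outcome) {
  per_class <- tapply((prob - outcome)^2, outcome, mean)
  mean(per_class)
}

# hypothetical usage with a vector of predictions p and 0/1 outcomes y:
# stratified_brier(p, y)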
This work goes in a similar direction: ieeexplore.ieee.org/document/6413859 Edit: To follow up on the discussion in the comments, which partially concerns the error due to sampling for a model trained on an imbalanced vs. a balanced dataset, I made a second small modification to the script (full version 2 of the new script below). In this modification the original predictive models are evaluated on one test set, while the "0" models are tested on a separate "dataset_test_new", which is generated with the same code. This represents either a new sample from the same population or a new "batch", "minibatch" or subset of the data, as used when training models with gradient descent. Now the "gradient" of the Brier score from a non-predictive to a predictive model seems quite revealing: > round( quantile( (brier_score_logistic - brier_score_logistic_0)/brier_score_logistic_0), 3) 0% 25% 50% 75% 100% -0.221 -0.100 -0.052 0.019 0.131 > round( quantile( (brier_score_logistic_oversampled - brier_score_logistic_oversampled_0)/brier_score_logistic_oversampled_0),3) 0% 25% 50% 75% 100% -0.318 -0.258 -0.242 -0.215 -0.135 > round( quantile( (brier_score_randomForest - brier_score_randomForest_0)/brier_score_randomForest_0), 3) 0% 25% 50% 75% 100% -0.213 -0.092 -0.046 0.020 0.127 > round( quantile( (brier_score_randomForest_oversampled - brier_score_randomForest_oversampled_0)/brier_score_randomForest_oversampled_0), 3) 0% 25% 50% 75% 100% -0.304 -0.273 -0.255 -0.232 -0.155 > round( mean(brier_score_logistic>brier_score_logistic_0), 3) [1] 0.31 > round( mean(brier_score_randomForest>brier_score_randomForest_0), 3) [1] 0.33 So now, in 31-33% of simulations for the imbalanced models, the Brier score of the "0" model is "better" (smaller) than the score of the predictive model, despite a sample size of 10,000, while for models trained on balanced data the gradient of the Brier score is consistently in the right direction (predictive models lower than "0" models). This seems to me to be quite clearly due to the sampling variability in the imbalanced setting, where even small variations (individual observations) result in much stronger variability in performance (as observed above, the overall Brier score is more strongly affected by prevalence than by the actual predictors when trained on an imbalanced dataset). As discussed below, I expect that this may strongly affect any sampling approaches during gradient descent training (minibatch, subsampling, etc.), while when using exactly the same dataset in each epoch the effect may be less prominent.
The modified version of OP's code: library(randomForest) library(beanplot) nn_train <- nn_test <- 1e4 n_sims <- 1e2 true_coefficients <- c(-7, 5, rep(0, 9)) incidence_train <- rep(NA, n_sims) model_logistic_coefficients <- model_logistic_oversampled_coefficients <- matrix(NA, nrow=n_sims, ncol=length(true_coefficients)) brier_score_logistic <- brier_score_logistic_oversampled <- brier_score_logistic_0 <- brier_score_logistic_oversampled_0 <- brier_score_randomForest <- brier_score_randomForest_oversampled <- brier_score_randomForest_0 <- brier_score_randomForest_oversampled_0 <- rep(NA, n_sims) #pb <- winProgressBar(max=n_sims) for ( ii in 1:n_sims ) { print(ii)#setWinProgressBar(pb,ii,paste(ii,"of",n_sims)) set.seed(ii) while ( TRUE ) { # make sure we even have the minority # class predictors_train <- matrix( runif(nn_train*(length(true_coefficients) - 1)), nrow=nn_train) logit_train <- cbind(1, predictors_train)%*%true_coefficients probability_train <- 1/(1+exp(-logit_train)) outcome_train <- factor(runif(nn_train) <= probability_train) if ( sum(incidence_train[ii] <- sum(outcome_train==TRUE))>0 ) break } dataset_train <- data.frame(outcome=outcome_train, predictors_train) index <- c(which(outcome_train==TRUE), sample(which(outcome_train==FALSE), sum(outcome_train==TRUE))) model_logistic <- glm(outcome~., dataset_train, family="binomial") model_logistic_0 <- glm(outcome~., dataset_train[,-2], family="binomial") model_logistic_oversampled <- glm(outcome~., dataset_train[index, ], family="binomial") model_logistic_oversampled_0 <- glm(outcome~., dataset_train[index, -2], family="binomial") model_logistic_coefficients[ii, ] <- coefficients(model_logistic) model_logistic_oversampled_coefficients[ii, ] <- coefficients(model_logistic_oversampled) model_randomForest <- randomForest(outcome~., dataset_train) model_randomForest_0 <- randomForest(outcome~., dataset_train[,-2]) model_randomForest_oversampled <- randomForest(outcome~., dataset_train, subset=index) model_randomForest_oversampled_0 <- randomForest(outcome~., dataset_train[,-2], subset=index) predictors_test <- matrix(runif(nn_test * (length(true_coefficients) - 1)), nrow=nn_test) logit_test <- cbind(1, predictors_test)%*%true_coefficients probability_test <- 1/(1+exp(-logit_test)) outcome_test <- factor(runif(nn_test)<=probability_test) dataset_test <- data.frame(outcome=outcome_test, predictors_test) prediction_logistic <- predict(model_logistic, dataset_test, type="response") brier_score_logistic[ii] <- mean((prediction_logistic - (outcome_test==TRUE))^2) prediction_logistic_0 <- predict(model_logistic_0, dataset_test[,-2], type="response") brier_score_logistic_0[ii] <- mean((prediction_logistic_0 - (outcome_test==TRUE))^2) prediction_logistic_oversampled <- predict(model_logistic_oversampled, dataset_test, type="response") brier_score_logistic_oversampled[ii] <- mean((prediction_logistic_oversampled - (outcome_test==TRUE))^2) prediction_logistic_oversampled_0 <- predict(model_logistic_oversampled_0, dataset_test[,-2], type="response") brier_score_logistic_oversampled_0[ii] <- mean((prediction_logistic_oversampled_0 - (outcome_test==TRUE))^2) prediction_randomForest <- predict(model_randomForest, dataset_test, type="prob") brier_score_randomForest[ii] <- mean((prediction_randomForest[,2]-(outcome_test==TRUE))^2) prediction_randomForest_0 <- predict(model_randomForest_0, dataset_test[,-2], type="prob") brier_score_randomForest_0[ii] <- mean((prediction_randomForest_0[,2]-(outcome_test==TRUE))^2) prediction_randomForest_oversampled <- 
predict(model_randomForest_oversampled, dataset_test, type="prob") brier_score_randomForest_oversampled[ii] <- mean((prediction_randomForest_oversampled[, 2] - (outcome_test==TRUE))^2) prediction_randomForest_oversampled_0 <- predict(model_randomForest_oversampled_0, dataset_test, type="prob") brier_score_randomForest_oversampled_0[ii] <- mean((prediction_randomForest_oversampled_0[, 2] - (outcome_test==TRUE))^2) } #close(pb) quantile( (brier_score_logistic - brier_score_logistic_0)/brier_score_logistic_0) quantile( (brier_score_logistic_oversampled - brier_score_logistic_oversampled_0)/brier_score_logistic_oversampled_0) quantile( (brier_score_randomForest - brier_score_randomForest_0)/brier_score_randomForest_0) quantile( (brier_score_randomForest_oversampled - brier_score_randomForest_oversampled_0)/brier_score_randomForest_oversampled_0) Version 2: library(randomForest) library(beanplot) nn_train <- nn_test <- 1e4 n_sims <- 1e2 true_coefficients <- c(-7, 5, rep(0, 9)) incidence_train <- rep(NA, n_sims) model_logistic_coefficients <- model_logistic_oversampled_coefficients <- matrix(NA, nrow=n_sims, ncol=length(true_coefficients)) brier_score_logistic <- brier_score_logistic_oversampled <- brier_score_logistic_0 <- brier_score_logistic_oversampled_0 <- brier_score_randomForest <- brier_score_randomForest_oversampled <- brier_score_randomForest_0 <- brier_score_randomForest_oversampled_0 <- rep(NA, n_sims) #pb <- winProgressBar(max=n_sims) for ( ii in 1:n_sims ) { print(ii)#setWinProgressBar(pb,ii,paste(ii,"of",n_sims)) set.seed(ii) while ( TRUE ) { # make sure we even have the minority # class predictors_train <- matrix( runif(nn_train*(length(true_coefficients) - 1)), nrow=nn_train) logit_train <- cbind(1, predictors_train)%*%true_coefficients probability_train <- 1/(1+exp(-logit_train)) outcome_train <- factor(runif(nn_train) <= probability_train) if ( sum(incidence_train[ii] <- sum(outcome_train==TRUE))>0 ) break } dataset_train <- data.frame(outcome=outcome_train, predictors_train) index <- c(which(outcome_train==TRUE), sample(which(outcome_train==FALSE), sum(outcome_train==TRUE))) model_logistic <- glm(outcome~., dataset_train, family="binomial") model_logistic_0 <- glm(outcome~., dataset_train[,-2], family="binomial") model_logistic_oversampled <- glm(outcome~., dataset_train[index, ], family="binomial") model_logistic_oversampled_0 <- glm(outcome~., dataset_train[index, -2], family="binomial") model_logistic_coefficients[ii, ] <- coefficients(model_logistic) model_logistic_oversampled_coefficients[ii, ] <- coefficients(model_logistic_oversampled) model_randomForest <- randomForest(outcome~., dataset_train) model_randomForest_0 <- randomForest(outcome~., dataset_train[,-2]) model_randomForest_oversampled <- randomForest(outcome~., dataset_train, subset=index) model_randomForest_oversampled_0 <- randomForest(outcome~., dataset_train[,-2], subset=index) predictors_test <- matrix(runif(nn_test * (length(true_coefficients) - 1)), nrow=nn_test) logit_test <- cbind(1, predictors_test)%*%true_coefficients probability_test <- 1/(1+exp(-logit_test)) outcome_test <- factor(runif(nn_test)<=probability_test) dataset_test <- data.frame(outcome=outcome_test, predictors_test) prediction_logistic <- predict(model_logistic, dataset_test, type="response") brier_score_logistic[ii] <- mean((prediction_logistic - (outcome_test==TRUE))^2) prediction_logistic_oversampled <- predict(model_logistic_oversampled, dataset_test, type="response") brier_score_logistic_oversampled[ii] <- 
mean((prediction_logistic_oversampled - (outcome_test==TRUE))^2) prediction_randomForest <- predict(model_randomForest, dataset_test, type="prob") brier_score_randomForest[ii] <- mean((prediction_randomForest[,2]-(outcome_test==TRUE))^2) prediction_randomForest_oversampled <- predict(model_randomForest_oversampled, dataset_test, type="prob") brier_score_randomForest_oversampled[ii] <- mean((prediction_randomForest_oversampled[, 2] - (outcome_test==TRUE))^2) #sampling another testing dataset for "0" model predictors_test <- matrix(runif(nn_test * (length(true_coefficients) - 1)), nrow=nn_test) logit_test <- cbind(1, predictors_test)%*%true_coefficients probability_test <- 1/(1+exp(-logit_test)) outcome_test <- factor(runif(nn_test)<=probability_test) dataset_test_new <- data.frame(outcome=outcome_test, predictors_test) prediction_logistic_0 <- predict(model_logistic_0, dataset_test_new[,-2], type="response") brier_score_logistic_0[ii] <- mean((prediction_logistic_0 - (outcome_test==TRUE))^2) prediction_logistic_oversampled_0 <- predict(model_logistic_oversampled_0, dataset_test_new[,-2], type="response") brier_score_logistic_oversampled_0[ii] <- mean((prediction_logistic_oversampled_0 - (outcome_test==TRUE))^2) prediction_randomForest_0 <- predict(model_randomForest_0, dataset_test_new[,-2], type="prob") brier_score_randomForest_0[ii] <- mean((prediction_randomForest_0[,2]-(outcome_test==TRUE))^2) prediction_randomForest_oversampled_0 <- predict(model_randomForest_oversampled_0, dataset_test_new, type="prob") brier_score_randomForest_oversampled_0[ii] <- mean((prediction_randomForest_oversampled_0[, 2] - (outcome_test==TRUE))^2) } #close(pb) round( quantile( (brier_score_logistic - brier_score_logistic_0)/brier_score_logistic_0), 3) round( quantile( (brier_score_logistic_oversampled - brier_score_logistic_oversampled_0)/brier_score_logistic_oversampled_0),3) round( quantile( (brier_score_randomForest - brier_score_randomForest_0)/brier_score_randomForest_0), 3) round( quantile( (brier_score_randomForest_oversampled - brier_score_randomForest_oversampled_0)/brier_score_randomForest_oversampled_0), 3)
1,813
Under what conditions does correlation imply causation?
Correlation is not sufficient for causation. One can get around the Wikipedia example by imagining that those twins always cheated on their tests by having a device that gives them the answers. The twin that goes to the amusement park loses the device, hence the low grade. A good way to get this stuff straight is to think of the structure of the Bayesian network that may be generating the measured quantities, as done by Pearl in his book Causality. His basic point is to look for hidden variables. If there is a hidden variable that happens not to vary in the measured sample, then the correlation would not imply causation. Expose all hidden variables and you have causation.
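A quick way to see the hidden-variable point in practice (a toy R simulation of my own, not an example from Pearl's book): a hidden common cause Z makes X and Y correlated even though neither causes the other, and the correlation disappears once Z is exposed and controlled for.

set.seed(1)
z <- rnorm(1e4)                          # hidden common cause
x <- z + rnorm(1e4)                      # Z -> X
y <- z + rnorm(1e4)                      # Z -> Y (no arrow between X and Y)
cor(x, y)                                # around 0.5: correlation without causation
cor(resid(lm(x ~ z)), resid(lm(y ~ z)))  # about 0 once Z is controlled for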
1,814
Under what conditions does correlation imply causation?
I'll just add some additional comments about causality as viewed from an epidemiological perspective. Most of these arguments are taken from Practical Psychiatric Epidemiology, by Prince et al. (2003). Causation, or the interpretation of causality, is by far the most difficult aspect of epidemiological research. Cohort and cross-sectional studies, for example, might both lead to confounding effects. Quoting S. Menard (Longitudinal Research, Sage University Paper 76, 1991), H.B. Asher in Causal Modeling (Sage, 1976) initially proposed the following set of criteria to be fulfilled: The phenomena or variables in question must covary, as indicated for example by differences between experimental and control groups or by a nonzero correlation between the two variables. The relationship must not be attributable to any other variable or set of variables, i.e., it must not be spurious, but must persist even when other variables are controlled, as indicated for example by successful randomization in an experimental design (no difference between experimental and control groups prior to treatment) or by a nonzero partial correlation between the two variables with the other variables held constant. The supposed cause must precede or be simultaneous with the supposed effect in time, as indicated by the change in the cause occurring no later than the associated change in the effect. While the first two criteria can easily be checked using a cross-sectional or time-ordered cross-sectional study, the latter can only be assessed with longitudinal data, except for biological or genetic characteristics for which the temporal order can be assumed without longitudinal data. Of course, the situation becomes more complex in the case of a non-recursive causal relationship. I also like the illustration in Chapter 13 of the aforementioned reference, which summarizes the approach promulgated by Hill (1965), comprising nine different criteria related to causation, as also cited by @James. The original article was indeed entitled "The environment and disease: association or causation?" (PDF version). Finally, Chapter 2 of Rothman's most famous book, Modern Epidemiology (1998, Lippincott Williams & Wilkins, 2nd Edition), offers a very complete discussion of causation and causal inference, from both a statistical and a philosophical perspective. I'd add that the following references (roughly taken from an online course in epidemiology) are also very interesting: Swaen, G and van Amelsvoort, L (2009). A weight of evidence approach to causal inference. Journal of Clinical Epidemiology, 62, 270-277. Botti, C, Comba, P, Forastiere, F, and Settimi, L (1996). Causal inference in environmental epidemiology: the role of implicit values. The Science of the Total Environment, 184, 97-101. Weed, DL (2002). Environmental epidemiology: basics and proof of cause-effect. Toxicology, 181-182, 399-403. Franco, EL, Correa, P, Santella, RM, Wu, X, Goodman, SN, and Petersen, GM (2004). Role and limitations of epidemiology in establishing a causal association. Seminars in Cancer Biology, 14, 413-426. Finally, this review offers a larger perspective on causal modeling: Causal inference in statistics: An overview (J Pearl, Statistics Surveys 3, 2009).
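For the second of Asher's criteria above, the partial correlation with a single control variable can be computed directly from the pairwise correlations; a small base-R sketch of my own, with hypothetical vectors x, y and a control variable z:

# first-order partial correlation of x and y, holding z constant
partial_cor <- function(x, y, z) {
  rxy <- cor(x, y); rxz <- cor(x, z); ryz <- cor(y, z)
  (rxy - rxz * ryz) / sqrt((1 - rxz^2) * (1 - ryz^2))
}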
1,815
Under what conditions does correlation imply causation?
At the heart of your question is the question "when is a relationship causal?" It doesn't just need to be correlation implying (or not) causation. A good book on this topic is Mostly Harmless Econometrics by Joshua Angrist and Jorn-Steffen Pischke. They start from the experimental ideal, where we are able to randomise the "treatment" under study in some fashion, and then move on to alternative methods for generating this randomisation in order to draw causal inferences. This begins with the study of so-called natural experiments. One of the first examples of a natural experiment being used to identify causal relationships is Angrist's 1989 paper "Lifetime Earnings and the Vietnam Era Draft Lottery." This paper attempts to estimate the effect of military service on lifetime earnings. A key problem with estimating any causal effect is that certain types of people may be more likely to enlist, which may bias any measurement of the relationship. Angrist uses the natural experiment created by the Vietnam draft lottery to effectively "randomly assign" the treatment "military service" to a group of men. So when do we have causality? Under experimental conditions. When do we get close? Under natural experiments. There are also other techniques that get us close to "causality", i.e. they are much better than simply using statistical control. They include regression discontinuity, difference-in-differences, etc.
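To illustrate the draft-lottery logic, here is a toy R simulation of my own (simulated data, not Angrist's data or exact method): an unobserved confounder biases the naive regression, while using the random lottery as an instrument recovers the effect of service.

set.seed(1)
n <- 1e4
ability  <- rnorm(n)                                 # unobserved confounder
lottery  <- rbinom(n, 1, 0.3)                        # random "draft eligibility"
service  <- rbinom(n, 1, plogis(-1 + 2 * lottery - 0.5 * ability))
earnings <- 1 - 0.4 * service + ability + rnorm(n)   # true effect of service: -0.4

coef(lm(earnings ~ service))["service"]              # naive OLS: biased by ability

# two-stage least squares by hand: first stage predicts service from the lottery
service_hat <- fitted(lm(service ~ lottery))
coef(lm(earnings ~ service_hat))["service_hat"]      # close to the true -0.4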
1,816
Under what conditions does correlation imply causation?
There is also a problem with the opposite case, when a lack of correlation is used as proof of a lack of causation. The problem is nonlinearity: when looking at correlation, people usually check Pearson correlation, which is only the tip of the iceberg.
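A standard toy example of this point (my own sketch): a strong but purely nonlinear dependence that the Pearson correlation misses entirely.

set.seed(1)
x <- rnorm(1e4)
y <- x^2 + rnorm(1e4, sd = 0.1)   # y depends strongly on x, but not linearly
cor(x, y)                          # approximately 0: Pearson sees nothing
cor(abs(x), y)                     # strongly positive: the dependence is real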
1,817
Under what conditions does correlation imply causation?
Your example is that of a controlled experiment. The only other context that I know of where a correlation can imply causation is that of a natural experiment. Basically, a natural experiment takes advantage of an assignment of some respondents to a treatment that happens naturally in the real world. Since the assignment of respondents to treatment and control groups is not controlled by the experimenter, the extent to which correlation implies causation is somewhat weaker. See the wiki links for more information on controlled / natural experiments.
1,818
Under what conditions does correlation imply causation?
In my opinion, the APA Statistical Task Force summarised it quite well: ''Inferring causality from nonrandomized designs is a risky enterprise. Researchers using nonrandomized designs have an extra obligation to explain the logic behind covariates included in their designs and to alert the reader to plausible rival hypotheses that might explain their results. Even in randomized experiments, attributing causal effects to any one aspect of the treatment condition requires support from additional experimentation.'' - APA Task Force
1,819
Under what conditions does correlation imply causation?
Sir Austin Bradford Hill's President's Address to the Royal Society of Medicine (The Environment and Disease: Association or Causation?) explains nine criteria which help to judge whether there is a causal relationship between two correlated or associated variables. They are:
1. Strength of the association.
2. Consistency: "has it been repeatedly observed by different persons, in different places, circumstances and times?"
3. Specificity.
4. Temporality: "which is the cart and which is the horse?" - the cause must precede the effect.
5. Biological gradient (dose-response curve): in what way does the magnitude of the effect depend upon the magnitude of the (suspected) causal variable?
6. Plausibility: is there a likely explanation for causation?
7. Coherence: would causation contradict other established facts?
8. Experiment: does experimental manipulation of the (suspected) causal variable affect the (suspected) dependent variable?
9. Analogy: have we encountered similar causal relationships in the past?
1,820
Under what conditions does correlation imply causation?
In the twins example it is not just the correlation that suggests causality, but also the associated information or prior knowledge. Suppose I add one further piece of information. Assume that the diligent twin spent 6 hours studying for a stats exam, but due to an unfortunate error the exam was in history. Would we still conclude that the study was the cause of the superior performance? Determining causality is as much a philosophical question as a scientific one, hence the tendency to invoke philosophers such as David Hume and Karl Popper when causality is discussed. Not surprisingly, medicine has made significant contributions to establishing causality through heuristics, such as Koch's postulates for establishing the causal relationship between microbes and disease. These have been extended to "molecular Koch's postulates", required to show that a gene in a pathogen encodes a product that contributes to the disease caused by the pathogen. Unfortunately I can't post hyperlinks, supposedly beCAUSE I'm a new user (not true) and don't have enough "reputation points". The real reason is anybody's guess.
1,821
Under what conditions does correlation imply causation?
Correlation alone never implies causation. It's that simple. But it's very rare to have only a correlation between two variables. Often you also know something about what those variables are and a theory, or theories, suggesting why there might be a causal relationship between the variables. If not, then why bother checking for a correlation? (However, people mining massive correlation matrices for significant results often have no causal theory - otherwise, why bother mining? A counterargument to that is that often some exploration is needed to get ideas for causal theories. And so on and so on...) A response to the common criticism "Yeah, but that's just a correlation: it doesn't imply causation": For a causal relationship, correlation is necessary. A repeated failure to find a correlation would be bad news indeed. I didn't just give you a correlation. Then go on to explain possible causal mechanisms explaining the correlation...
1,822
Under what conditions does correlation imply causation?
1. Almost always in randomized trials.
2. Almost always in observational studies when someone measures all confounders (almost never).
3. Sometimes when someone measures some confounders (the IC* algorithm of DAG discovery in Pearl's book Causality).
4. In non-Gaussian linear models with two or more variables, but not using correlation as the measure of relationship (LiNGAM).
Most of these discovery algorithms are implemented in Tetrad IV.
1,823
Under what conditions does correlation imply causation?
One useful sufficient condition for some definitions of causation: causation can be claimed when one of the correlated variables can be controlled (we can directly set its value) and the correlation is still present.
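(A hedged R sketch of this condition, not part of the original answer; the structural model below is hypothetical. A hidden common cause z produces a correlation between X and Y without X causing Y; once we control X by setting its value ourselves, the correlation disappears, so by this condition causation cannot be claimed.)
set.seed(2)
n <- 10000
z <- rnorm(n)                  # hidden common cause
x_obs <- z + rnorm(n)          # X driven by Z
y_obs <- z + rnorm(n)          # Y driven by Z, not by X
cor(x_obs, y_obs)              # roughly 0.5: correlated, yet X does not cause Y
x_set <- rnorm(n)              # now we directly set X (an intervention)
y_set <- z + rnorm(n)          # Y is generated exactly as before
cor(x_set, y_set)              # roughly 0: the correlation does not survive control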
1,824
Under what conditions does correlation imply causation?
A related question might be: under what conditions can you reliably extract causal relations from data? A 2008 NIPS workshop tried to address that question empirically. One of the tasks was to infer the direction of causality from observations of pairs of variables where one variable was known to cause the other, and the best method was able to correctly extract the causal direction 80% of the time.
1,825
Under what conditions does correlation imply causation?
Almost surely in a well designed experiment. (Designed, of course, to elicit such a connexion.)
1,826
Under what conditions does correlation imply causation?
Suppose we think factor A is the cause of phenomenon B. Then we try to vary it to see whether B changes. If B doesn't change, and if we can assume that everything else is unchanged, that is strong evidence that A is not the cause of B. If B does change, we can't conclude that A is the cause, because the change in A might have caused a change in the actual cause C, which made B change.
1,827
Under what conditions does correlation imply causation?
I noticed that 'proof' was used here when discussing the empirical paradigm. There is no such thing. First comes the hypothesis, where the idea is advanced; then comes testing, under "controlled conditions" [note a], and if a "sufficient" lack of disproof is encountered, it advances to the stage of hypothesis... period. There is no proof, unless one can 1) manage to be at every occurrence of said event [note b] and of course 2) establish causation. 1) is improbable in an infinite universe [note: infinity by nature cannot be proven]. Note a: no experiment is conducted under totally controlled conditions, and the more controlled the conditions are, the less the resemblance to the outside universe, with its apparently infinite lines of causation. Note b: mind you, you have to have described said 'event' perfectly, which presumably means a perfectly correct language - presumably not a human language. As a final note, all causation presumably goes back to the First Event. Now go talk to everyone with a theory. Yes, I have studied formally and informally. In the end: no, proximity does not imply causation, nor even anything other than temporary correlation. The timespan of a mountain (given that they are alive; prove they aren't), and therefore its perception, is not that of a (wo)man.
1,828
Under what conditions does correlation imply causation?
I read all the answers. Some useful insights are given, but no single answer seems decisive to me, not even the accepted one (it may be correct but it is too vague). I offered here (Under which assumptions a regression can be interpreted causally?) an answer to the question: under which assumptions can a [linear] regression be interpreted causally? I think that it gives a decisive answer to the question. Now, it can be shown that any linear regression coefficient can be converted into a linear correlation (total or partial). As a consequence, my linked answer also answers the question: under what conditions does [linear] correlation imply causation? Finally, if we are interested in nonlinear relationships, the core of the answer remains the same but the math becomes harder.
1,829
Under what conditions does correlation imply causation?
If you want to determine whether $X$ causes $Y$, and you run the regression $Y = bX + u$, then $b$ is an unbiased estimator of the causal effect of $X$ on $Y$ (that is, $\mathrm{E}(b)=B$) if and only if there is no correlation between $X$ and $u$, that is, $\mathrm{E}(u|X)=0$. This is because $u$ can be thought of as everything else that causes $Y$. So if this assumption holds, $b$ is an unbiased estimate of the effect of $X$ on $Y$ ceteris paribus (other things being equal). Being unbiased is a desirable property of an estimator, but you would also want your estimator to be efficient (low variance) and consistent (tends in probability to the true value). See the Gauss-Markov assumptions.
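(A hedged R sketch, not from the original answer, of what $\mathrm{E}(u|X)=0$ buys you: with the error unrelated to $X$ the OLS slope recovers the causal coefficient $B$, while an omitted cause correlated with $X$ biases it. The variable names and the effect size $B=2$ are made up for the example.)
set.seed(3)
n <- 100000
B <- 2                                 # true causal effect of X on Y
w <- rnorm(n)                          # an omitted cause of Y
x_good <- rnorm(n)                     # X unrelated to w, so E(u | X) = 0 holds
y_good <- B * x_good + w + rnorm(n)
coef(lm(y_good ~ x_good))["x_good"]    # close to 2
x_bad <- w + rnorm(n)                  # X correlated with the omitted cause
y_bad <- B * x_bad + w + rnorm(n)
coef(lm(y_bad ~ x_bad))["x_bad"]       # around 2.5: biased, because E(u | X) is not 0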
1,830
Understanding "variance" intuitively
I would probably use a similar analogy to the one I've learned to give 'laypeople' when introducing the concept of bias and variance: the dartboard analogy. See below: The particular image above is from Encyclopedia of Machine Learning, and the reference within the image is Moore and McCabe's "Introduction to the Practice of Statistics". EDIT: Here's an exercise that I believe is pretty intuitive: Take a deck of cards (out of the box), and drop the deck from a height of about 1 foot. Ask your child to pick up the cards and return them to you. Then, instead of dropping the deck, toss it as high as you can and let the cards fall to the ground. Ask your child to pick up the cards and return them to you. The relative fun they have during the two trials should give them an intuitive feel for variance :)
Understanding "variance" intuitively
I would probably use a similar analogy to the one I've learned to give 'laypeople' when introducing the concept of bias and variance: the dartboard analogy. See below: The particular image above is
Understanding "variance" intuitively I would probably use a similar analogy to the one I've learned to give 'laypeople' when introducing the concept of bias and variance: the dartboard analogy. See below: The particular image above is from Encyclopedia of Machine Learning, and the reference within the image is Moore and McCabe's "Introduction to the Practice of Statistics". EDIT: Here's an exercise that I believe is pretty intuitive: Take a deck of cards (out of the box), and drop the deck from a height of about 1 foot. Ask your child to pick up the cards and return them to you. Then, instead of dropping the deck, toss it as high as you can and let the cards fall to the ground. Ask your child to pick up the cards and return them to you. The relative fun they have during the two trials should give them an intuitive feel for variance :)
Understanding "variance" intuitively I would probably use a similar analogy to the one I've learned to give 'laypeople' when introducing the concept of bias and variance: the dartboard analogy. See below: The particular image above is
1,831
Understanding "variance" intuitively
I used to teach statistics to laymen using jokes, and I found they learn a lot. For variance or standard deviation the following joke is quite useful: Joke: Once two statisticians of heights 4 feet and 5 feet had to cross a river of AVERAGE depth 3 feet. Meanwhile, a third statistician came along and said, "What are you waiting for? You can easily cross the river." I am assuming that the layman already knows the term 'average'. You can also ask them the same question: would they cross the river in this situation? What they are missing is the 'variance' needed to decide what to do in the situation. It's all about your presentation skills. However, jokes help a lot for the layman who wants to understand statistics. I hope it helps!
Understanding "variance" intuitively
I used to teach statistics to a layman by jokes, and I found they learn a lot. Suppose for variance or standard deviation the following joke is quite useful: Joke Once two statistician of height 4 fee
Understanding "variance" intuitively I used to teach statistics to a layman by jokes, and I found they learn a lot. Suppose for variance or standard deviation the following joke is quite useful: Joke Once two statistician of height 4 feet and 5 feet have to cross a river of AVERAGE depth 3 feet. Meanwhile, a third statistician comes and said, "what are you waiting for? You can easily cross the river" I am assuming that layman know about 'average' term. You can also ask them the same question that would they cross the river in this situation? What are they missing that is 'variance' to decide "what to do in the situation?" It's all about your presentation skills. However, jokes help a lot to the layman who wants to understand statistics. I hope it helps!
Understanding "variance" intuitively I used to teach statistics to a layman by jokes, and I found they learn a lot. Suppose for variance or standard deviation the following joke is quite useful: Joke Once two statistician of height 4 fee
1,832
Understanding "variance" intuitively
I disagree with a lot of the answers advocating that people think of variance purely as spread. As smart people (Nassim Taleb) have pointed out, when people think of variance as spread they tend to assume it is the MAD (mean absolute deviation). Variance is a description of how far members are from the mean, AND it judges each observation's importance by this same distance. This means observations far away are weighted more heavily. Hence the squares. I think the variance of a continuous uniform variable is the easiest to picture. Each observation can have a square drawn to it. Stacking these squares creates a pyramid. Cut the pyramid in half so that half the weight is on one side and half on the other. The face where you cut it is the variance.
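(A small R illustration of the weighting point above, added here and not part of the original answer; the sample is made up. One far point dominates the variance much more than it dominates the mean absolute deviation.)
x <- c(0, 0, 0, 0, 100)    # four identical values and one far point
m <- mean(x)               # 20
mean(abs(x - m))           # mean absolute deviation: 32
mean((x - m)^2)            # variance (dividing by n): 1600
# the far point accounts for 6400 of the 8000 total squared distance (80%),
# but only 80 of the 160 total absolute distance (50%)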
Understanding "variance" intuitively
I disagree with a lot of the answers advocating people to purely think of variance as spread. As smart people (Nassim Taleb) have pointed out, when people think of variance as spread they just assume
Understanding "variance" intuitively I disagree with a lot of the answers advocating people to purely think of variance as spread. As smart people (Nassim Taleb) have pointed out, when people think of variance as spread they just assume it is MAD. Variance is a description of how far members are from the mean, AND it judges each observation's importance by this same distance. This means observations far away are judged more importantly. Hence squares. I think the variance of a continuous uniform variable is the easiest to picture. Each observation can have a square drawn to it. Stacking these squares creates a pyramid. Cut the pyramid in half so half the weight is in one side and half is in the other. The face where you cut it is the variance.
Understanding "variance" intuitively I disagree with a lot of the answers advocating people to purely think of variance as spread. As smart people (Nassim Taleb) have pointed out, when people think of variance as spread they just assume
1,833
Understanding "variance" intuitively
I would focus on the standard deviation rather than the variance; the variance is on the wrong scale. Just as the average is a typical value, the SD is a typical (absolute) difference from the average. It's not unlike folding the distribution over at the average and taking the average of that.
Understanding "variance" intuitively
I would focus on the standard deviation rather than the variance; the variance is on the wrong scale. Just as the average is a typical value, the SD is a typical (absolute) difference from the avera
Understanding "variance" intuitively I would focus on the standard deviation rather than the variance; the variance is on the wrong scale. Just as the average is a typical value, the SD is a typical (absolute) difference from the average. It's not unlike folding the distribution over at the average and taking the average of that.
Understanding "variance" intuitively I would focus on the standard deviation rather than the variance; the variance is on the wrong scale. Just as the average is a typical value, the SD is a typical (absolute) difference from the avera
1,834
Understanding "variance" intuitively
I have a lot of practice giving lectures about standard deviation and variance to a novice audience. Let's assume one already knows about the average. By the average (or e.g. the median) one gets a single value from many measurements (that is how one usually uses them). But it is very important to say that knowing some average is not enough at all. The second half of the knowledge is the error of that value. Skip the next 2 paragraphs of motivation if lazy.
Let's say you have some measurement device that cost 1 000 000\$. And it gives you the answer: 42. Do you think one paid 1 000 000\$ for 42? Phooey! 1 000 000 is paid for the precision of that answer. Because a value costs nothing without knowing its error. You pay for the error, not the value. Here is a good live example: commonly, we use a ruler to measure a distance. The ruler provides a precision of around one millimeter (if you use the metric system). What if you have to go beyond that and measure something with 0.1mm precision? - You would probably use a caliper. Now, it is easy to check that a cheap ruler with a mm scale costs cents, while a reliable caliper costs ~$10. Two orders of magnitude in price for one order of magnitude in precision. And that is a very usual ratio of how much one pays for smaller errors.
The problem. Let's say we have a thermometer (choose a measurement device depending on what is closer to the audience). We did N measurements of the same temperature and the thermometer showed us something like 36.5, 35.9, 37.0, 36.6, ... (see the pic). But we know that the real temperature was the same all the time, and the values differ because in every measurement the thermometer lies to us a bit. We can calculate the average (see the red line on the picture below). Can we believe it? Even after averaging, does it have enough precision for our needs? For human health estimation, for example? How can one estimate how much this little scum lies to us?
Max deviation - the easiest but not the best approach. We can take the farthest point, calculate the distance between it and the average (red line) and say that this is how much the thermometer lies to us, because it is the maximum error we see. One could guess that this estimate is too rough. If we look at the picture, most of the points are around the average; how can we decide just by one point? Actually, one can practice naming reasons why such an estimate is rough and usually bad.
Variance. Then... let's take all the distances and calculate the average distance from the average (on the picture - the average distance between each point and the red line)! BTW, how do we calculate a distance? When you hear "distance" it translates to "subtract" in math. Thus we start our formula with $(x_{i} - \bar{x})$, where $\bar{x}$ is the average (red line) and $x_{i}$ is one of the measurements (points). Then one could imagine that the formula for the average distance would be summing everything and dividing by N: $$\frac{\sum(x_{i} - \bar{x})}{N}$$ But there is a problem. We can easily see, e.g., that 36.4 and 36.8 are at the same distance from 36.6, but if we put the values in the formula above, we get -0.2 and +0.2, and their sum equals 0, which is not what we want. How do we get rid of the sign? At this point someone usually says "Take the absolute value of each point!". Taking the absolute value is actually a way to go, but what is the other way? We can square the values! Then the formula becomes: $$\frac{\sum(x_{i} - \bar{x})^{2}}{N}$$ This formula is called the "Variance" in statistics.
And it fits much better for estimating the spread of our thermometer (or whatever) values than taking just the maximum distance. Standard deviation. But still there is one more problem. Look at the variance formula. Squares make our measurement units... squared. If the thermometer measures the temperature in °C (or °F) then our error estimate is measured in $°C^{2}$ (or $°F^{2}$). How do we neutralize the squares? - Use the square root! $$\sqrt{\frac{\sum(x_{i} - \bar{x})^{2}}{N}}$$ So here we come to the Standard Deviation formula, which is commonly denoted as $\sigma$. And that is the better way to estimate our device's precision. Hope it was easy to understand. From this point it should be easy to go to the "68–95–99.7 rule", sampling and population, standard error vs standard deviation terms, etc. P.S. @whuber pointed out a good related QA - "Why square the difference instead of taking the absolute value in standard deviation?"
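(A short R sketch following the steps of this answer, added as an illustration; it uses the four readings quoted in the text and divides by N as the formulas above do, whereas R's built-ins divide by N - 1.)
x <- c(36.5, 35.9, 37.0, 36.6)   # the thermometer readings from the text
m <- mean(x)                     # 36.5
max(abs(x - m))                  # max deviation: 0.6
mean((x - m)^2)                  # variance as defined above: 0.155
sqrt(mean((x - m)^2))            # standard deviation, back on the °C scale: about 0.39
var(x); sd(x)                    # R's versions divide by N - 1: about 0.207 and 0.455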
Understanding "variance" intuitively
Have a lot of practice giving lectures about standard deviation and variance to a novice audience. Lets assume, one knows about average already. By average (or e.g. median) - one gets a single value
Understanding "variance" intuitively Have a lot of practice giving lectures about standard deviation and variance to a novice audience. Lets assume, one knows about average already. By average (or e.g. median) - one gets a single value from many measurements (that is how one usually uses them). But it is very import to say, that knowing some average is not enough at all. The second half of the knowledge is what is the error of the value. Skip the next 2 paragraphs of motivation if lazy Lets say you have some measurement device, that costed 1 000 000\$. And it gives you the answer: 42. Do you think one paid 1 000 000\$ for 42? Phooey! 1 000 000 is paid for the precision of that answer. Because Value - costs nothing without knowing its Error. You pay for the error, not the value. Here is a good live example: Commonly, we use a ruler to measure a distance. The ruler provides a precision around one millimeter (if you use metric system). What if you have to go beyond and measure something with like 0.1mm precision? - You probably would use a caliper. Now, it is easy to check, that a cheap ruler with mm scale costs cents, while reliable caliper costs ~$10. Two orders of magnitude in price for one order of magnitude in precision. And that is very usual ratio of how much one pays for smaller errors. The problem. Lets say we have a thermometer (Choose a measurement device depending on what is closer to auditory). We did N measurements of the same temperature and thermometer showed us something like 36.5, 35.9, 37.0, 36.6, ... (see the pic). But we know that the real temperature was the same all the time, and values are different because in every measurement the thermometer lies to us a bit. We can calculate the average (see red line on the picture below). Can we believe it? Even after averaging, does it have enough precision for our needs? For human health estimation for example? How can one estimate how much this little scum lies to us? Max deviation - the easiest but not the best approach. We can take the farthest point, calculate the distance between it and the average (red line) and say, that this is how thermometer lies to us, because it is maximum error we see. One could guess, this estimation is too rough. If we look at the picture, most of the points are around the average, how can we decide just by one point? Actually one can practice in naming reasons why such estimation is rough and usually bad. Variance. Then... lets take all distances and calculate an average distance from the average (on picture - average distance between each point and the red line)! BTW, how to calculate a distance? When you hear the "distance" it translates to "subtract" in math. Thus we start our formula with $ (x_{i} - \bar{x})$ where $\bar{x}$ is the average (red line) and $x_{i}$ is one of the measurements (points). Then one could imagine that the formula of average distance would be summing everything and dividing by N: $$\frac{\sum(x_{i} - \bar{x})}{N} $$ But there is a problem. We can easily see, eg. that 36.4, and 36.8 are at the same distance from 36.6. but if we put the values in the formula above, we get -0.2 and +0.2, and their sum equals 0, which is not what we want. How to get rid of the sign? At this points someone usually says "Take the absolute value of each point!". Taking an absolute value is actually a way to go, but what is the other way? We can square the values! Then the formula becomes: $$\frac{\sum(x_{i} - \bar{x})^{2}}{N} $$. This formula is called "Variance" in statistics. 
And it fits Much better to estimate the spread of our thermometer (or whatever) values, than taking just the maximum distance. Standard deviation. But still there is one more problem. Look at the variance formula. Squares make our measurement units... squared. If the thermometer measures the temperature in °C (or °F) then our error estimation is measured in $°C^{2}$ (or $°F^{2}$). How to neutralize the squares? - Use the square root! $$\sqrt{\frac{\sum(x_{i} - \bar{x})^{2}}{N}}$$ So here we come to the Standard Deviation formula which is commonly denoted as $\sigma$. And that is the better way to estimate our device precision. Hope it was easy to understand. From this point it should be easy go to "68–95–99.7 rule", sampling and population, standard error vs standard deviation terms Etc. P.S. @whuber pointed out a good related QA - "Why square the difference instead of taking the absolute value in standard deviation?"
Understanding "variance" intuitively Have a lot of practice giving lectures about standard deviation and variance to a novice audience. Lets assume, one knows about average already. By average (or e.g. median) - one gets a single value
1,835
Understanding "variance" intuitively
I was sitting down trying to puzzle out variance, and the thing that finally made it click into place for me was to look at it graphically. Say you draw out a number line with four points: -7, -1, 1 and 7. Now draw an imaginary Y axis with the same four points along the Y dimension, and use the XY pairs to draw out the square for each pair of points. You wind up with four separate squares consisting of 49, 1, 1, and 49 smaller squares each. Each of them contributes to an overall sum of squares which, itself, can be represented as a large 10 x 10 square with 100 smaller squares overall. Variance is the size of the average square contributing to that larger square. 49 + 1 + 49 + 1 = 100, and 100/4 = 25. So 25 would be the variance. The standard deviation would be the length of one of the sides of that average square, or 5. Obviously this analogy does not cover the full nuance of the concept of variance. There are a lot of things that need to be explained, such as why we often use a denominator of n-1 to estimate the population parameter instead of simply using n. But as a basic concept to peg the rest of a detailed understanding of variance to, simply drawing it out so I could see it helped immensely. It helps in understanding what we mean when we say that variance is the average squared deviation from the mean. It also helps in understanding just what relationship the SD has to that average.
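(The arithmetic of this answer, checked in a few lines of R; this is an added illustration, not part of the original. Note it divides by n, as the answer does, while R's var() divides by n - 1.)
x <- c(-7, -1, 1, 7)
sum(x^2)             # 100: the area of the large square
mean(x^2)            # 25: the variance (the mean of x is 0 here)
sqrt(mean(x^2))      # 5: the standard deviation, the side of the "average square"
var(x)               # 33.33..., since R divides by n - 1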
Understanding "variance" intuitively
I was sitting down trying to puzzle out variance and the thing that finally made it click into place for me was to look at it graphically. Say you draw out a number line with four points, -7, -1, 1 an
Understanding "variance" intuitively I was sitting down trying to puzzle out variance and the thing that finally made it click into place for me was to look at it graphically. Say you draw out a number line with four points, -7, -1, 1 and 7. Now draw an imaginary Y axis with the same four points along the Y dimension, and use the XY pairs to draw out the square for each pair of points. You wind up with four separate squares consisting of 49, 1, 1, and 49 smaller squares, each. Each of them contributes to an overall sum of squares which, itself, can be represented as a large 10 x 10 square with 100 smaller squares overall. Variance is the size of the average square contributing to that larger square. 49 + 1 + 49 + 1 = 100, 100/4 = 25. So 25 would be the variance. The standard deviation would be the length of one of the sides of that average square, or 5. Obviously this analogy does not cover the full nuance of the concept of variance. There are a lot of things that need explained, such as why we often use a denominator of n-1 to estimate the population parameter, instead of simply using n. But as a basic concept to peg the rest of a detailed understanding of variance to, simply drawing it out so I could see it helped immensely. It helps understand what we mean when we say that variance is the average squared deviation from the mean. It also helps in understanding just what relationship SD has to that average.
Understanding "variance" intuitively I was sitting down trying to puzzle out variance and the thing that finally made it click into place for me was to look at it graphically. Say you draw out a number line with four points, -7, -1, 1 an
1,836
Understanding "variance" intuitively
Imagine you ask 1000 people to guess how many beans are in a jar filled with jelly beans. Now imagine that you are not necessarily interested in knowing the correct answer (which may be of some use) but you wish to get a better understanding of how people estimate the answer. Variance could be explained to a lay person as the spread of the different answers (from highest to lowest). You could continue by adding that if enough people were questioned, the correct answer should lie somewhere in the middle of the spread of 'guesstimates' given.
Understanding "variance" intuitively
Imagine you ask 1000 people to correctly guess how many beans are in a jar filled with jelly beans. Now imagine that you are not necessarily interested in knowing the correct answer (which may be of s
Understanding "variance" intuitively Imagine you ask 1000 people to correctly guess how many beans are in a jar filled with jelly beans. Now imagine that you are not necessarily interested in knowing the correct answer (which may be of some use) but you wish to get a better understanding of how people estimate the answer. Variance could be explained to a lay person as the spread of different answers (from highest to lowest). You could continue by adding that if enough people were to questioned the correct answer should lie somewhere in the middle of the spread of 'guestimates' given.
Understanding "variance" intuitively Imagine you ask 1000 people to correctly guess how many beans are in a jar filled with jelly beans. Now imagine that you are not necessarily interested in knowing the correct answer (which may be of s
1,837
Understanding "variance" intuitively
I think the key phrase to use when explaining both variance and standard deviation is "measure of spread". In the most basic language, the variance and standard deviation tell us how well spread out the data is. To be a little more accurate, although still addressing the layman, they tell us how well the data is spread out around the mean. In passing, note that the mean is a "measure of location". To conclude the explanation to the layman, it ought to be highlighted that the standard deviation is expressed in the same units as the data we're working with and that it is for this reason that we take the square root of the variance. i.e. the two are linked. I think that brief explanation would do the trick. It's probably somewhat similar to an introductory textbook explanation anyway.
Understanding "variance" intuitively
I think the key phrase to use when explaining both variance and standard deviation is "measure of spread". In the most basic language, the variance and standard deviation tell us how well spread out t
Understanding "variance" intuitively I think the key phrase to use when explaining both variance and standard deviation is "measure of spread". In the most basic language, the variance and standard deviation tell us how well spread out the data is. To be a little more accurate, although still addressing the layman, they tell us how well the data is spread out around the mean. In passing, note that the mean is a "measure of location". To conclude the explanation to the layman, it ought to be highlighted that the standard deviation is expressed in the same units as the data we're working with and that it is for this reason that we take the square root of the variance. i.e. the two are linked. I think that brief explanation would do the trick. It's probably somewhat similar to an introductory textbook explanation anyway.
Understanding "variance" intuitively I think the key phrase to use when explaining both variance and standard deviation is "measure of spread". In the most basic language, the variance and standard deviation tell us how well spread out t
1,838
Understanding "variance" intuitively
I'd like to provide two perspectives: (1) I regard the variance of a distribution as the moment of inertia about an axis located at the mean of the distribution, with each mass equal to 1. This intuition makes the abstract concept concrete. The first moment is the mean of the distribution and the second central moment is the variance. (2) Precision is the reciprocal of the variance: $\phi = \frac{1}{\sigma^2}$. The larger the variance, the lower the precision. References: A First Course in Probability, 8th edition; Precision_(statistics)
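(A tiny R illustration of both perspectives, added here and not part of the original answer; the parameters are arbitrary.)
set.seed(8)
x <- rnorm(10000, mean = 5, sd = 2)
mean(x)                     # first moment: about 5
mean((x - mean(x))^2)       # second central moment, i.e. the variance: about 4
1 / var(x)                  # precision: about 0.25 - the larger the variance, the lower the precision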
Understanding "variance" intuitively
I'd like to provide two perspectives: I regard the variance of distribution as the moment of inertia with the axis that at the mean of the distribution and each mass as 1. This intuition would make t
Understanding "variance" intuitively I'd like to provide two perspectives: I regard the variance of distribution as the moment of inertia with the axis that at the mean of the distribution and each mass as 1. This intuition would make the abstract concept concrete. The first moment is the mean of the distribution and the second moment is the variance. Precision is the reciprocal of the variance. $\phi = \frac{1}{\sigma^2}$. The larger the variance the less the precision. Reference: A first course of probability 8th edition Precision_(statistics)
Understanding "variance" intuitively I'd like to provide two perspectives: I regard the variance of distribution as the moment of inertia with the axis that at the mean of the distribution and each mass as 1. This intuition would make t
1,839
Convergence in probability vs. almost sure convergence
From my point of view the difference is important, but largely for philosophical reasons. Assume you have some device that improves with time. So, every time you use the device the probability of it failing is less than before. Convergence in probability says that the chance of failure goes to zero as the number of usages goes to infinity. So, after using the device a large number of times, you can be very confident of it working correctly; it still might fail, it's just very unlikely. Convergence almost surely is a bit stronger. It says that the total number of failures is finite. That is, if you count the number of failures as the number of usages goes to infinity, you will get a finite number. The impact of this is as follows: as you use the device more and more, you will, after some finite number of usages, exhaust all failures. From then on the device will work perfectly. As Srikant points out, you don't actually know when you have exhausted all failures, so from a purely practical point of view, there is not much difference between the two modes of convergence. However, personally I am very glad that, for example, the strong law of large numbers exists, as opposed to just the weak law. Because now a scientific experiment to obtain, say, the speed of light is justified in taking averages. At least in theory, after obtaining enough data, you can get arbitrarily close to the true speed of light. There won't be any failures (however improbable) in the averaging process. Let me clarify what I mean by ''failures (however improbable) in the averaging process''. Choose some $\delta > 0$ arbitrarily small. You obtain $n$ estimates $X_1,X_2,\dots,X_n$ of the speed of light (or some other quantity) that has some `true' value, say $\mu$. You compute the average $$S_n = \frac{1}{n}\sum_{k=1}^n X_k.$$ As we obtain more data ($n$ increases) we can compute $S_n$ for each $n = 1,2,\dots$. The weak law says (under some assumptions about the $X_n$) that the probability $$P(|S_n - \mu| > \delta) \rightarrow 0$$ as $n$ goes to $\infty$. The strong law says that the number of times that $|S_n - \mu|$ is larger than $\delta$ is finite (with probability 1). That is, if we define the indicator function $I(|S_n - \mu| > \delta)$ that returns one when $|S_n - \mu| > \delta$ and zero otherwise, then $$\sum_{n=1}^{\infty}I(|S_n - \mu| > \delta)$$ converges. This gives you considerable confidence in the value of $S_n$, because it guarantees (i.e. with probability 1) the existence of some finite $n_0$ such that $|S_n - \mu| < \delta$ for all $n > n_0$ (i.e. the average never fails for $n > n_0$). Note that the weak law gives no such guarantee.
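(A hedged R sketch of the indicator-sum idea above, added as an illustration and not part of the original answer; the true value mu = 3, the tolerance delta = 0.05 and the noise level are made up. On a single simulated path we can count how many running averages fall outside the delta-band and when the last such failure occurred; the strong law says this count is finite with probability 1.)
set.seed(7)
mu <- 3; delta <- 0.05; n <- 100000
x <- rnorm(n, mean = mu, sd = 1)        # noisy measurements of mu
s <- cumsum(x) / seq_len(n)             # running averages S_1, ..., S_n
sum(abs(s - mu) > delta)                # number of "failures" along this path
max(which(abs(s - mu) > delta))         # index of the last failure observed in this run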
1,840
Convergence in probability vs. almost sure convergence
I know this question has already been answered (and quite well, in my view), but there was a different question here which had a comment from @NRH that mentioned the graphical explanation, and rather than put the pictures there it would seem more fitting to put them here. So, here goes. It's not as cool as an R package. But it's self-contained and doesn't require a subscription to JSTOR. In the following we're talking about a simple random walk, $X_{i}= \pm 1$ with equal probability, and we are calculating running averages, $$ \frac{S_{n}}{n} = \frac{1}{n}\sum_{i = 1}^{n}X_{i},\quad n=1,2,\ldots. $$ The SLLN (convergence almost surely) says that we can be 100% sure that this curve stretching off to the right will eventually, at some finite time, fall entirely within the bands forever afterward (to the right). The R code used to generate this graph is below (plot labels omitted for brevity).
n <- 1000; m <- 50; e <- 0.05
s <- cumsum(2*(rbinom(n, size=1, prob=0.5) - 0.5))
plot(s/seq.int(n), type = "l", ylim = c(-0.4, 0.4))
abline(h = c(-e,e), lty = 2)
The WLLN (convergence in probability) says that a large proportion of the sample paths will be in the bands on the right-hand side at time $n$ (for the above it looks like around 48 or 49 out of 50). We can never be sure that any particular curve will be inside at any finite time, but looking at the mass of noodles above it'd be a pretty safe bet. The WLLN also says that we can make the proportion of noodles inside as close to 1 as we like by making the plot sufficiently wide. The R code for the graph follows (again, skipping labels).
x <- matrix(2*(rbinom(n*m, size=1, prob=0.5) - 0.5), ncol = m)
y <- apply(x, 2, function(z) cumsum(z)/seq_along(z))
matplot(y, type = "l", ylim = c(-0.4,0.4))
abline(h = c(-e,e), lty = 2, lwd = 2)
1,841
Convergence in probability vs. almost sure convergence
I understand it as follows:

Convergence in probability: The probability that the sequence of random variables differs from the target value (by more than any given margin) is asymptotically decreasing and approaches 0, but never actually attains 0.

Almost sure convergence: The sequence of random variables will equal the target value asymptotically, but you cannot predict at what point it will happen.

Almost sure convergence is a stronger condition on the behavior of a sequence of random variables because it states that "something will definitely happen" (we just don't know when). In contrast, convergence in probability states that "while something is likely to happen", the likelihood of "something not happening" decreases asymptotically but never actually reaches 0. (Here "something" $\equiv$ a sequence of random variables converging to a particular value.)

The wiki has some examples of both which should help clarify the above (in particular see the example of the archer in the context of convergence in probability and the example of the charity in the context of almost sure convergence).

From a practical standpoint, convergence in probability is enough, as we do not particularly care about very unlikely events. As an example, consistency of an estimator is essentially convergence in probability. Thus, when using a consistent estimate, we implicitly acknowledge the fact that in large samples there is a very small probability that our estimate is far from the true value. We live with this 'defect' of convergence in probability, as we know that asymptotically the probability of the estimator being far from the truth is vanishingly small.
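A small simulation sketch of the consistency remark (my own example, with an arbitrary choice of distribution and tolerance): the probability that the sample mean misses the true mean by more than eps shrinks as the sample size grows.

set.seed(42)
eps <- 0.1; mu <- 0; reps <- 2000
for (n in c(10, 100, 1000, 10000)) {
  xbar <- replicate(reps, mean(rnorm(n, mean = mu)))   # sample means from samples of size n
  cat("n =", n, " estimated P(|xbar - mu| > eps) =", mean(abs(xbar - mu) > eps), "\n")
}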
1,842
Convergence in probability vs. almost sure convergence
If you enjoy visual explanations, there was a nice 'Teacher's Corner' article on this subject in The American Statistician (cite below). As a bonus, the authors included an R package to facilitate learning.

@article{lafaye09,
  title={Understanding Convergence Concepts: A Visual-Minded and Graphical Simulation-Based Approach},
  author={Lafaye de Micheaux, P. and Liquet, B.},
  journal={The American Statistician},
  volume={63},
  number={2},
  pages={173--178},
  year={2009},
  publisher={ASA}
}
1,843
Convergence in probability vs. almost sure convergence
One thing that helped me to grasp the difference is the following equivalence $$P\left(\lim_{n\to\infty}|X_n-X|=0\right) = 1 \iff \lim_{n\to\infty} P\left(\sup_{m\geq n}|X_m-X|>\epsilon \right) = 0 ~~\forall \epsilon > 0.$$ In comparison, stochastic convergence (convergence in probability) is: $$\lim_{n\to\infty}P(|X_n-X|>\epsilon) = 0 ~~\forall \epsilon >0.$$ When comparing the right-hand side of the equivalence above with stochastic convergence, the difference becomes clearer, I think.
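One immediate consequence worth spelling out (my addition): the event in the plain convergence-in-probability statement is contained in the event on the right-hand side of the equivalence, so almost sure convergence implies convergence in probability:
$$\{|X_n-X|>\epsilon\} \subseteq \Big\{\sup_{m\geq n}|X_m-X|>\epsilon\Big\} \quad\Longrightarrow\quad P(|X_n-X|>\epsilon) \leq P\Big(\sup_{m\geq n}|X_m-X|>\epsilon\Big) \to 0.$$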
1,844
Convergence in probability vs. almost sure convergence
This last guy explains it very well. Take a sequence of random variables with $X_n = 1$ with probability $1/n$ and zero otherwise. It is easy to see, taking limits, that this converges to zero in probability, but (if the $X_n$ are independent) it fails to converge almost surely. As he said, convergence in probability doesn't care that we might get a one down the road; almost sure convergence does. Almost sure convergence implies convergence in probability, but not the other way around.
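A quick numerical check of this example (a sketch of mine, assuming the $X_n$ are independent): $P(|X_n| > \epsilon) = 1/n \to 0$, yet the probability that a one still shows up at some later index stays essentially at 1, which is what blocks almost sure convergence. (By the second Borel-Cantelli lemma, since $\sum 1/n$ diverges, ones keep occurring forever with probability 1.)

n <- 100; M <- 10^6
p_single <- 1 / n                        # P(|X_n| > eps) for any 0 < eps < 1
p_later_one <- 1 - prod(1 - 1 / (n:M))   # P(X_m = 1 for some m between n and M), independence assumed
c(p_single, p_later_one)                 # about 0.01 versus about 0.9999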
1,845
What, precisely, is a confidence interval?
I found this thought experiment helpful when thinking about confidence intervals. It also answers your question 3. Let $X\sim U(0,1)$ and $Y=X+a-\frac{1}{2}$. Consider two observations of $Y$ taking the values $y_1$ and $y_2$ corresponding to observations $x_1$ and $x_2$ of $X$, and let $y_l=\min(y_1,y_2)$ and $y_u=\max(y_1,y_2)$. Then $[y_l,y_u]$ is a 50% confidence interval for $a$ (since the interval includes $a$ if $x_1<\frac12<x_2$ or $x_1>\frac12>x_2$, each of which has probability $\frac14$). However, if $y_u-y_l>\frac12$ then we know that the probability that the interval contains $a$ is $1$, not $\frac12$. The subtlety is that a $z\%$ confidence interval for a parameter means that the endpoints of the interval (which are random variables) lie either side of the parameter with probability $z\%$ before you calculate the interval, not that the probability of the parameter lying within the interval is $z\%$ after you have calculated the interval.
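A brief simulation of this thought experiment (my own sketch; the value of $a$ is arbitrary) confirms both claims: the unconditional coverage is 50%, while the intervals wider than $\frac12$ always contain $a$.

set.seed(1)
a <- 3; reps <- 1e5
x1 <- runif(reps); x2 <- runif(reps)
y1 <- x1 + a - 0.5; y2 <- x2 + a - 0.5
yl <- pmin(y1, y2); yu <- pmax(y1, y2)
covered <- (yl <= a) & (a <= yu)
mean(covered)                    # close to 0.50
mean(covered[yu - yl > 0.5])     # exactly 1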
1,846
What, precisely, is a confidence interval?
There are many issues concerning confidence intervals, but let's focus on the quotations. The problem lies in possible misinterpretations rather than being a matter of correctness. When people say a "parameter has a particular probability of" something, they are thinking of the parameter as being a random variable. This is not the point of view of a (classical) confidence interval procedure, for which the random variable is the interval itself and the parameter is determined, not random, yet unknown. This is why such statements are frequently attacked. Mathematically, if we let $t$ be any procedure that maps data $\mathbf{x} = (x_i)$ to subsets of the parameter space and if (no matter what the value of the parameter $\theta$ may be) the assertion $\theta \in t(\mathbf{x})$ defines an event $A(\mathbf{x})$, then--by definition--it has a probability $\Pr_{\theta}\left( A(\mathbf{x}) \right)$ for any possible value of $\theta$. When $t$ is a confidence interval procedure with confidence $1-\alpha$ then this probability is supposed to have an infimum (over all parameter values) of $1-\alpha$. (Subject to this criterion, we usually select procedures that optimize some additional property, such as producing short confidence intervals or symmetric ones, but that's a separate matter.) The Weak Law of Large Numbers then justifies the second quotation. That, however, is not a definition of confidence intervals: it is merely a property they have. I think this analysis has answered question 1, shows that the premise of question 2 is incorrect, and makes question 3 moot.
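As a concrete instance of this definition (my own illustration, not part of the answer above), take the familiar procedure $t(\mathbf{x}) = [\bar{x} - 1.96\,\sigma/\sqrt{n},\ \bar{x} + 1.96\,\sigma/\sqrt{n}]$ for a normal mean with known $\sigma$; simulation shows that $\Pr_\theta(\theta \in t(\mathbf{X}))$ is about 0.95 whatever the value of $\theta$.

set.seed(1)
sigma <- 1; n <- 20; reps <- 1e4
half <- 1.96 * sigma / sqrt(n)
for (theta in c(-5, 0, 2.7)) {                     # arbitrary parameter values
  xbar <- replicate(reps, mean(rnorm(n, theta, sigma)))
  cat("theta =", theta, " coverage =", mean(abs(xbar - theta) <= half), "\n")
}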
1,847
What, precisely, is a confidence interval?
I wouldn't call the definition of CIs wrong, but they are easy to misinterpret, due to there being more than one definition of probability. CIs are based on the following definition of probability (frequentist or ontological):

(1) probability of a proposition = long-run proportion of times that proposition is observed to be true, conditional on the data generating process

Thus, in order to be conceptually valid in using a CI, you must accept this definition of probability. If you don't, then your interval is not a CI, from a theoretical point of view. This is why the definition used the word proportion and NOT the word probability, to make it clear that the "long run frequency" definition of probability is being used.

The main alternative definition of probability (epistemological, or probability as an extension of deductive logic, or Bayesian) is

(2) probability of a proposition = rational degree of belief that the proposition is true, conditional on a state of knowledge

People often intuitively get both of these definitions mixed up, and use whichever interpretation happens to appeal to their intuition. This can get you into all kinds of confusing situations (especially when you move from one paradigm to the other). That the two approaches often lead to the same result means that in some cases we have:

rational degree of belief that the proposition is true, conditional on a state of knowledge = long run proportion of times that proposition is observed to be true, conditional on the data generating process

The point is that this does not hold universally, so we cannot expect the two different definitions to always lead to the same results. So, unless you actually work out the Bayesian solution, and then find it to be the same interval, you cannot give the interval produced by the CI procedure the interpretation as a probability of containing the true value. And if you do, then the interval is not a confidence interval, but a credible interval.
1,848
What, precisely, is a confidence interval?
R.A. Fisher had a criterion for the usefulness of confidence intervals: a CI should not admit of "identifiable subsets" that imply a different confidence level. In most (if not all) counterexamples, we have cases where there are identifiable subsets that have different coverage probabilities. In these cases, you can either use Bayesian credible intervals to specify a subjective sense of where the parameter is, or you can formulate a likelihood interval to reflect the relative uncertainty in the parameter, given the data.

For example, one case that seems relatively contradiction-free is the two-sided normal confidence interval for the population mean. Assuming sampling from a normal population with given standard deviation, the 95% CI admits of no identifiable subsets that would provide more information about the parameter. This can be seen from the fact that the sample mean is a sufficient statistic in the likelihood function - i.e., the likelihood function is independent of the individual sample values once we know the sample mean. The reason we have any subjective confidence in the 95% symmetric CI for the normal mean stems less from the stated coverage probability and more from the fact that the symmetric 95% CI for the normal mean is the "highest likelihood" interval, i.e., all parameter values within the interval have a higher likelihood than any parameter value outside the interval. However, since likelihood is not a probability (in the long-run accuracy sense), it is more of a subjective criterion (as is the Bayesian use of prior and likelihood). In sum, there are infinitely many intervals for the normal mean that have 95% coverage probability, but only the symmetric CI has the intuitive plausibility that we expect from an interval estimate.

Therefore, R.A. Fisher's criterion implies that coverage probability should equate with subjective confidence only if the CI admits of none of these identifiable subsets. If subsets are present, then the coverage probability will be conditional on the true values of the parameter(s) describing the subset. To get an interval with the intuitive level of confidence, you would need to condition the interval estimate on the appropriate ancillary statistics that help identify the subset. Or, you could resort to dispersion/mixture models, which naturally leads to interpreting the parameters as random variables (aka Bayesian statistics), or you can calculate the profile/conditional/marginal likelihoods under the likelihood framework. Either way, you've abandoned any hope of coming up with an objectively verifiable probability of being correct, only a subjective "ordering of preferences." Hope this helps.
1,849
What, precisely, is a confidence interval?
This is the thing that may be hard to understand: if on average 95% of all confidence intervals will contain the parameter, and I have one specific confidence interval, why isn't the probability that this interval contains the parameter also 95%?

A confidence interval relates to the sampling procedure. If you would take many samples and calculate a 95% confidence interval for each sample, you'd find that 95% of those intervals contain the population mean. This is useful, for instance, to industrial quality departments. Those guys take many samples, and now they have the confidence that most of their estimates will be pretty close to the reality. They know that 95% of their estimates are pretty good, but they can't say that about each and every specific estimate.

Compare this to rolling dice: if you would roll 600 (fair) dice, how many sixes would you throw? Your best guess is $\frac{1}{6} \times 600 = 100$. However, if you have thrown ONE die, it is useless to say: "There is a 1/6 or 16.7% probability that I have now thrown a 6". Why? Because the die shows either a 6, or some other figure. You have thrown a 6, or not. So the probability is 1, or 0. The probability cannot be $\frac{1}{6}$. When asked before the throw what the probability of throwing a 6 with ONE die would be, a Bayesian would answer "$\frac{1}{6}$" (based on prior information: everybody knows that a die has 6 sides and an equal chance of falling on any of them), but a frequentist would say "no idea", because frequentism is solely based on the data, not on priors or any outside information.

Likewise, if you have only 1 sample (thus 1 confidence interval), you have no way to say how likely it is that the population mean is in that interval. The mean (or any parameter) is either in it, or not. The probability is either 1, or 0.

Also, it is not correct that values within the confidence interval are more likely than those outside of it. I made a small illustration; everything is measured in °C. Remember, water freezes at 0 °C and boils at 100 °C. The case: in a cold lake, we'd like to estimate the temperature of the water that flows below the ice. We measure the temperature in 100 locations. Here are my data:
0.1 °C (measured in 49 locations);
0.2 °C (also in 49 locations);
0 °C (in 1 location; this was water just about to freeze);
95 °C (in one location; there is a factory that illegally dumps very hot water in the lake).
Mean temperature: 1.1 °C; standard deviation: about 9.5 °C; 95% CI: (-0.8 °C, +3.0 °C).

The temperatures within this confidence interval are definitely NOT more likely than those outside of it. The average temperature of the flowing water in this lake CANNOT be colder than 0 °C, otherwise it would not be water but ice. A part of this confidence interval (namely, the section from -0.8 to 0) actually has a 0% probability of containing the true parameter.

In conclusion: confidence intervals are a frequentist concept, and therefore are based on the idea of repeated samples. If many researchers would take samples from this lake, and if all those researchers would calculate confidence intervals, then 95% of those intervals will contain the true parameter. But for one single confidence interval it is impossible to say how likely it is that it contains the true parameter.
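For what it's worth, the summary numbers can be reproduced directly (a short sketch of mine, assuming an ordinary t-based interval is what the example intends):

temps <- c(rep(0.1, 49), rep(0.2, 49), 0, 95)
mean(temps)              # about 1.1
sd(temps)                # about 9.5
t.test(temps)$conf.int   # roughly (-0.8, 3.0)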
1,850
What, precisely, is a confidence interval?
From a theoretical perspective, Questions 2 and 3 are based on the incorrect assumption that the definitions are wrong. So I am in agreement with @whuber's answer in that respect, and @whuber's answer to question 1 does not require any additional input from me.

However, from a more practical perspective, a confidence interval can be given its intuitive interpretation (probability of containing the true value) when it is numerically identical to a Bayesian credible interval based on the same information (i.e. a non-informative prior). But this is somewhat disheartening for the die-hard anti-Bayesian, because in order to verify the conditions needed to give his or her CI the interpretation he or she wants to give it, they must work out the Bayesian solution, for which the intuitive interpretation automatically holds! The easiest example is a $1-\alpha$ confidence interval for the normal mean with known variance, $\overline{x}\pm z_{\alpha/2}\frac{\sigma}{\sqrt{n}}$, and the $1-\alpha$ posterior credible interval, also $\overline{x}\pm z_{\alpha/2}\frac{\sigma}{\sqrt{n}}$.

I am not exactly sure of the conditions, but I know the following are important for the intuitive interpretation of CIs to hold:

1) a pivotal statistic exists, whose distribution is independent of the parameters (do exact pivots exist outside normal and chi-square distributions?);

2) there are no nuisance parameters (except in the case of a pivotal statistic, which is one of the few exact ways one has to handle nuisance parameters when making CIs);

3) a sufficient statistic exists for the parameter of interest, and the confidence interval uses the sufficient statistic;

4) the sampling distribution of the sufficient statistic and the posterior distribution have some kind of symmetry between the sufficient statistic and the parameter. In the normal case the symmetry is in $(\overline{x}|\mu,\sigma)\sim N(\mu,\frac{\sigma}{\sqrt{n}})$ while $(\mu|\overline{x},\sigma)\sim N(\overline{x},\frac{\sigma}{\sqrt{n}})$.

These conditions are usually difficult to verify, and usually it is quicker to work out the Bayesian interval and compare it. An interesting exercise may also be to try and answer the question "for what prior is my CI also a credible interval?" You may discover some hidden assumptions about your CI procedure by looking at this prior.
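To make the "easiest example" concrete (my own sketch, with made-up data): with $\sigma$ known and a flat prior on $\mu$, the posterior is $N(\bar{x}, \sigma^2/n)$, so the 95% credible interval coincides numerically with the classical 95% CI.

set.seed(1)
sigma <- 2; n <- 25
x <- rnorm(n, mean = 10, sd = sigma)
xbar <- mean(x); se <- sigma / sqrt(n)
ci   <- xbar + c(-1, 1) * qnorm(0.975) * se            # frequentist 95% CI
cred <- qnorm(c(0.025, 0.975), mean = xbar, sd = se)   # 95% posterior interval under a flat prior
rbind(ci, cred)                                        # identical to numerical precision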
1,851
What, precisely, is a confidence interval?
Suppose we are in a simple situation. You have an unknown parameter $\theta$ and $T$, an estimator of $\theta$, that has an imprecision around 1 (informally). You think (informally) that $\theta$ should be in $[T-1;T+1]$ most often.

In a real experiment you observe $T=12$. It is natural to ask the question "Given what I see ($T=12$), what is the probability that $\theta\in[11;13]$?". Mathematically: $P(\theta\in[11;13]|T=12)$. Everybody naturally asks this question. The theory of confidence intervals should logically answer this question. But it doesn't.

Bayesian statistics does answer that question. In Bayesian statistics, you can really calculate $P(\theta\in[11;13]|T=12)$. But you need to assume a prior, that is, a distribution for $\theta$ before doing the experiment and observing $T$. For example:

Assume $\theta$ has a prior distribution uniform on $[0;30]$;
do the experiment, find $T=12$;
apply Bayes' formula: $P(\theta\in[11;13]|T=12)=0.94$.

But in frequentist statistics there is no prior, and thus anything like $P(\theta\in...|T \in...)$ does not exist. Instead, statisticians say something like this: "Whatever $\theta$ is, the probability that $\theta\in [T-1;T+1]$ is $0.95$". Mathematically: $\forall\theta, P(\theta\in[T-1;T+1]|\theta)=0.95$.

So:

Bayesian: $P(\theta\in[T-1;T+1]|T)=0.94$ for $T=12$
Frequentist: $\forall\theta, P(\theta\in[T-1;T+1]|\theta)=0.95$

The Bayesian statement is more natural. Most often, the frequentist statement is spontaneously misinterpreted as the Bayesian statement (by any normal human brain who hasn't practised statistics for years). And honestly, many statistics books do not make that point very clear.

And practically? In many usual situations, the probabilities obtained by frequentist and Bayesian approaches are very close, so that confusing the frequentist statement for the Bayesian one has little consequence. But "philosophically" it's very different.
1,852
What, precisely, is a confidence interval?
Here's the most degenerate example. If I want to build a 95% confidence interval $I$ for a real-valued parameter $\mu$, I can use the following distribution: $$ \mathbb P(I = (-\infty, +\infty)) = 0.95 \\ \mathbb P(I = \emptyset) = 0.05 $$ (Some definitions of confidence interval may not technically include infinite or empty intervals, but this doesn't affect the example). 95% of confidence intervals drawn from this distribution will contain $\mu$. But if I show you any particular confidence interval drawn from this distribution, your probability that it contains $\mu$ shouldn't be 0.95. It should be 1 for $(-\infty, +\infty)$ and 0 for $\emptyset$.
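A tiny simulation of this degenerate procedure (my sketch; the value of $\mu$ is arbitrary) makes the point explicit: the marginal coverage is 0.95, yet every realized interval contains $\mu$ with probability 1 or 0.

set.seed(1)
mu <- 1.23; reps <- 1e5
whole_line <- runif(reps) < 0.95   # TRUE: interval is (-Inf, Inf); FALSE: interval is empty
contains <- whole_line             # (-Inf, Inf) always contains mu; the empty set never does
mean(contains)                     # about 0.95 across repetitions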
1,853
What, precisely, is a confidence interval?
Okay, I realize that when you calculate a 95% confidence interval for a parameter using classical frequentist methods, it doesn't mean that there is a 95% probability that the parameter lies within that interval. And yet ... when you approach the problem from a Bayesian perspective, and calculate a 95% credible interval for the parameter, you get (assuming a non-informative prior) exactly the same interval that you get using the classical approach. So, if I use classical statistics to calculate the 95% confidence interval for (say) the mean of a data set, then it is true that there's a 95% probability that the parameter lies in that interval.
1,854
What, precisely, is a confidence interval?
You are asking about the frequentist confidence interval. The definition (note that neither of your two citations is a definition! They are just statements, and both are correct) is:

If I had repeated this experiment a large number of times, then, given this fitted model with these parameter values, in 95% of the experiments the estimated value of the parameter would fall within this interval.

So you have a model (built using your observed data) and its estimated parameters. Then, if you generated some hypothetical data sets according to this model and parameters, the estimated parameters would fall inside the confidence interval in 95% of those data sets. So in fact this frequentist approach takes the model and estimated parameters as fixed, as given, and treats your data as uncertain - as a random sample of many many other possible data sets. This is really hard to interpret, and it is often used as an argument for Bayesian statistics (which I think can sometimes be a little disputable).

Bayesian statistics, on the other hand, takes your data as fixed and treats the parameters as uncertain. Bayesian credible intervals are then actually intuitive, as you'd expect: Bayesian credible intervals are intervals in which the real parameter value lies with 95% probability.

But in practice many people interpret frequentist confidence intervals in the same way as Bayesian credible intervals, and many statisticians don't consider this a big issue - though they all know it is not 100% correct. Also, in practice, the frequentist and Bayesian confidence/credible intervals won't differ much when Bayesian uninformative priors are used.
1,855
What, precisely, is a confidence interval?
Suppose that we want to study the height of men $X$ in Canada for the last 2 years. Assume also that $X \sim N(\mu,\sigma^2)$ with $\sigma^2$ known. Then, talking about the probability \begin{equation*} P(X>1.80)=P(\omega \in \Omega:X(\omega)>1.80) \end{equation*} that a random man from Canada has a height more than $1.80$ makes sense, because $X$ is a random variable. On the other hand, talking about the probability \begin{equation*} \hspace{10mm} P(\mu>1.80)=P(\omega\in \Omega:\mu(\omega)>1.80) \hspace{4mm}(?) \end{equation*} that the average height of men in Canada is more than $1.80$ does not make sense, because the actual parameter is equal to a fixed number and it is not a random variable (if for example we knew that $\mu=1.75$, then $P(\mu>1.80)=0$, because $\mu=1.75$). The average height of men in Canada for the last 2 years is a unique number; it is not possible that the average height of any population takes more than one value.

Similarly, if $\{X_1,...,X_n\}$ are i.i.d. random variables from $X$, then $\overline{X}_n=\frac{X_1+...+X_n}{n}$ is also a random variable, so talking about the probability \begin{equation*} P\Big(\overline{X}_n \in \Big[\mu-z_{a/2}\frac{\sigma}{\sqrt{n}},\mu+z_{a/2}\frac{\sigma}{\sqrt{n}}\Big ] \Big)=0.95 \end{equation*} makes sense, while talking about the probability \begin{equation*} P\Big(\mu \in \Big[\overline{X}_n-z_{a/2}\frac{\sigma}{\sqrt{n}},\overline{X}_n+z_{a/2}\frac{\sigma}{\sqrt{n}}\Big ] \Big) \end{equation*} does not, because $\mu$ is not a random variable (we could only say that the last probability is either 1 or 0, because $\mu$ is either contained in the interval or not).

However, the probability \begin{equation*} P\Big(\overline{X}_n \in \Big[\mu-z_{a/2}\frac{\sigma}{\sqrt{n}},\mu+z_{a/2}\frac{\sigma}{\sqrt{n}}\Big ] \Big)=0.95 \end{equation*} means that, if we take a random sample $\{x_1,...,x_n\}$ of heights many times and estimate the empirical average height $\overline{x}=\frac{x_1+...+x_n}{n}$ each time, then $\overline{x}$ will be contained in the intervals \begin{equation*} \overline{x} \in \Big[\mu -z_{a/2}\frac{\sigma}{\sqrt{n}},\mu+z_{a/2}\frac{\sigma}{\sqrt{n}}\Big ] \end{equation*} approximately $95\%$ of these times. Equivalently, taking a random sample $\{x_1,...,x_n\}$ from $X$ many times, $\mu$ will be contained in the intervals \begin{equation*} \mu \in \Big[\overline{x} -z_{a/2}\frac{\sigma}{\sqrt{n}},\overline{x}+z_{a/2}\frac{\sigma}{\sqrt{n}}\Big ] \end{equation*} $95\%$ of these times.

Note that, each time we take a different sample $\{x_1,...,x_n\}$ from $X$, the confidence interval changes, since its center $\overline{x}=\frac{x_1+...+x_n}{n}$ does. However, the percentage of times that $\mu$ is contained in these different confidence intervals will still be approximately $95 \%$.
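A short simulation of the last two paragraphs (my own sketch, with made-up values of $\mu$, $\sigma$, and $n$): repeatedly drawing samples and forming $\overline{x} \pm z_{a/2}\,\sigma/\sqrt{n}$ covers $\mu$ in roughly 95% of the repetitions, even though for any single realized interval the coverage statement is simply true or false.

set.seed(2)
mu <- 1.78; sigma <- 0.07; n <- 50; reps <- 1e4   # hypothetical population of heights (in metres)
half <- qnorm(0.975) * sigma / sqrt(n)
covered <- replicate(reps, {
  xbar <- mean(rnorm(n, mu, sigma))
  (xbar - half <= mu) && (mu <= xbar + half)
})
mean(covered)   # approximately 0.95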
1,856
What, precisely, is a confidence interval?
A "confidence interval" is a specific case of the broader concept of a "confidence set", which may or may not be a single connected interval. The broader concept can be conceived mathematically as follows. Suppose we have an observable data vector $\mathbf{X}_n \equiv (X_1,...,X_n)$ from a distribution with unknown parameter $\theta \in \Theta$. Then a confidence set for the parameter $\theta$ is created from a set function $\mathcal{S}$ that satisfies the following conditional probability requirement:$^\dagger$ $$1-\alpha = \mathbb{P}(\theta \in \mathcal{S}(\mathbf{X}_n, \alpha)|\theta) \quad \quad \quad \text{for all } \theta \in \Theta \text{ and } 0 \leqslant \alpha \leqslant 1.$$ Note that if $\theta$ is conceived as a random variable then this requirement also implies the following weaker property pertaining to the marginal probability of inclusion: $$1-\alpha = \mathbb{P}(\theta \in \mathcal{S}(\mathbf{X}_n, \alpha)) \quad \quad \quad \text{for all } 0 \leqslant \alpha \leqslant 1. \quad \quad \quad \quad$$ Now, given a set function that comports to the above conditional probability requirement, for a given value $0 \leqslant \alpha \leqslant 1$, the confidence set for the data $\mathbf{x}_n$ (with confidence level $1-\alpha$) is the fixed set $\mathcal{S}(\mathbf{x}_n, \alpha)$. In the case where this is a single connected interval, we call it a confidence interval. As can be seen, the confidence set is a fixed set determined by the observed data. As such, it is not possible to make any non-degenerate probability statement about its coverage of a fixed parameter. However, if we treat the data as random, we can see that the random confidence set will contain the conditioning value of the parameter $\theta$ with probability equal to the confidence level. This holds regardless of the conditioning value, and so it also holds as a marginal property if $\theta$ is a random variable. As discussed in a related answer, this is an extremely useful and robust property. $^\dagger$ One slight complication here is that we sometimes form confidence intervals that only satisfy this probability requirement under some approximating assumption (e.g., distributional approximation using the central limit theorem). In these cases the probability requirement is satisfied exactly under some simplifying assumption and it holds approximately in broader cases.
1,857
Principled way of collapsing categorical variables with many levels?
If I understood correctly, you imagine a linear model where one of the predictors is categorical (e.g. college major); and you expect that for some subgroups of its levels (subgroups of categories) the coefficients might be exactly the same. So perhaps the regression coefficients for Maths and Physics are the same, but different from those for Chemistry and Biology. In the simplest case, you would have a "one way ANOVA" linear model with a single categorical predictor: $$y_{ij} = \mu + \alpha_i + \epsilon_{ij},$$ where $i$ encodes the level of the categorical variable (the category). But you might prefer a solution that collapses some levels (categories) together, e.g. $$\begin{cases}\alpha_1=\alpha_2, \\ \alpha_3=\alpha_4=\alpha_5.\end{cases}$$ This suggests that one can try to use a regularization penalty that would penalize solutions with differing alphas. One penalty term that immediately comes to mind is $$L=\omega \sum_{i<j}|\alpha_i-\alpha_j|.$$ This resembles the lasso and should enforce sparsity of the $\alpha_i-\alpha_j$ differences, which is exactly what you want: you want many of them to be zero. The regularization parameter $\omega$ should be selected with cross-validation. I have never dealt with models like that and the above is the first thing that came to my mind. Then I decided to see if there is something like that implemented. I made some Google searches and soon realized that this is called fusion of categories; searching for lasso fusion categorical will give you a lot of references to read. Here are a few that I briefly looked at: Gerhard Tutz, Regression for Categorical Data, see pp. 175-175 in Google Books. Tutz mentions the following four papers:
Land and Friedman, 1997, Variable fusion: a new adaptive signal regression method
Bondell and Reich, 2009, Simultaneous factor selection and collapsing levels in ANOVA
Gertheiss and Tutz, 2010, Sparse modeling of categorial explanatory variables
Tibshirani et al. 2005, Sparsity and smoothness via the fused lasso (somewhat relevant even if not exactly the same: it is about ordinal variables)
Gertheiss and Tutz 2010, published in the Annals of Applied Statistics, looks like a recent and very readable paper that contains other references. Here is its abstract: Shrinking methods in regression analysis are usually designed for metric predictors. In this article, however, shrinkage methods for categorial predictors are proposed. As an application we consider data from the Munich rent standard, where, for example, urban districts are treated as a categorial predictor. If independent variables are categorial, some modifications to usual shrinking procedures are necessary. Two $L_1$-penalty based methods for factor selection and clustering of categories are presented and investigated. The first approach is designed for nominal scale levels, the second one for ordinal predictors. Besides applying them to the Munich rent standard, methods are illustrated and compared in simulation studies. I like their lasso-like solution paths that show how levels of two categorical variables get merged together when regularization strength increases (see the figures in their paper).
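To make the penalty $L=\omega \sum_{i<j}|\alpha_i-\alpha_j|$ concrete, here is a rough Python sketch on made-up data. A generic derivative-free optimizer stands in for the specialized fused-lasso solvers used in the papers above, so treat it as an illustration of the idea rather than a practical implementation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy one-way layout: 5 categories whose true effects fall into 2 clusters
# (the grand mean mu is folded into the alphas to keep the sketch short).
true_alpha = np.array([1.0, 1.0, -0.5, -0.5, -0.5])
g = rng.integers(0, 5, size=400)                    # category label per observation
y = true_alpha[g] + rng.normal(scale=1.0, size=400)

def objective(alpha, omega):
    """Least-squares fit plus the all-pairs L1 fusion penalty on the alphas."""
    fit = np.sum((y - alpha[g]) ** 2)
    i, j = np.triu_indices(len(alpha), k=1)
    penalty = omega * np.sum(np.abs(alpha[i] - alpha[j]))
    return fit + penalty

# As omega grows, the estimated alphas fuse into fewer distinct values.
for omega in [0.0, 50.0, 500.0]:
    res = minimize(objective, x0=np.zeros(5), args=(omega,), method="Nelder-Mead")
    print(omega, np.round(res.x, 2))
```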
1,858
Principled way of collapsing categorical variables with many levels?
I've wrestled with this on a project I've been working on, and at this point I've decided there really isn't a good way to fuse categories and so I'm trying a hierarchical/mixed-effects model where my equivalent of your major is a random effect. Also, in situations like this there seem to actually be two fusing decisions to make: 1) how to fuse the categories you have when you fit the model, and 2) what fused category becomes "other" where you will by default include any new majors that someone dreams up after you fit your model. (A random effect can handle this second case automatically.) When the fusing has any judgement involved (as opposed to totally automated procedures), I'm skeptical of the "other" category which is often a grab bag of the categories with few things in them rather than any kind of principled grouping. A random effect handles a lot of levels, dynamically pools ("draws strength from") different levels, can predict previously-unseen levels, etc. One downside might be that the distribution of the levels is almost always assumed to be normal.
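As an illustration of this suggestion, a random-intercept model of this kind can be fit with statsmodels in Python; the data frame and column names below are made up:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Made-up data: outcome depends on a numeric covariate plus a per-major shift.
majors = [f"major_{k}" for k in range(30)]
shift = dict(zip(majors, rng.normal(scale=0.8, size=len(majors))))
df = pd.DataFrame({"major": rng.choice(majors, size=1500),
                   "x": rng.normal(size=1500)})
df["y"] = 2.0 + 0.5 * df["x"] + df["major"].map(shift) + rng.normal(size=1500)

# Random intercept for 'major': levels are partially pooled toward the grand mean,
# so sparse majors borrow strength from the rest, and an unseen major simply gets
# the population-level prediction.
model = smf.mixedlm("y ~ x", df, groups=df["major"])
result = model.fit()
print(result.params)                      # fixed effects and variance component
print(result.random_effects["major_0"])   # shrunken intercept shift for one major
```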
1,859
Principled way of collapsing categorical variables with many levels?
If you have an auxiliary independent variable that is logical to use as an anchor for the categorical predictor, consider the use of Fisher's optimum scoring algorithm, which is related to his linear discriminant analysis. Suppose that you wanted to map the college major into a single continuous metric, and suppose that a proper anchor is a pre-admission SAT quantitative test score. Compute the mean quantitative score for each major and replace the major with that mean. You can readily extend this to multiple anchors, creating more than one degree of freedom with which to summarize major. Note that, unlike some of the earlier suggestions, optimum scoring represents an unsupervised learning approach, so the degrees of freedom (number of parameters estimated against Y) are few and well defined, resulting in proper statistical inference (if frequentist, accurate standard errors, confidence (compatibility) intervals, and p-values). I do very much like the penalization suggestion by @amoeba (https://stats.stackexchange.com/users/28666/amoeba).
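A minimal sketch of the single-anchor version in Python; the column names and scores are hypothetical:

```python
import pandas as pd

# Hypothetical data: a 'major' column and an anchor SAT quantitative score.
df = pd.DataFrame({
    "major":     ["math", "math", "physics", "biology", "biology", "chemistry"],
    "sat_quant": [760,    720,    740,       640,       660,       690],
})

# Replace the many-level factor by the mean anchor score within each level,
# yielding a single continuous column ("major scored by typical SAT-Q").
df["major_score"] = df.groupby("major")["sat_quant"].transform("mean")
print(df)
```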
1,860
Principled way of collapsing categorical variables with many levels?
One way to handle this situation is to recode the categorical variable into a continuous one, using what is known as "target coding" (aka "impact coding") [1]. Let $Z$ be an input variable with categorical levels $\{z^1, \ldots, z^K\}$, and let $Y$ be the output/target/response variable. Replace $Z$ with $\operatorname{Impact}\left(Z\right)$, where $$ \operatorname{Impact}\left(z^k\right) = \operatorname{E}\left(Y\ |\ Z = z^k\right) - \operatorname{E}\left(Y\right) $$ for a continuous-valued $Y$. For binary-valued $Y$, use $\operatorname{logit} \circ \operatorname{E}$ instead of just $\operatorname{E}$. There is a Python implementation in the category_encoders library [2]. A variant called "impact coding" has been implemented in the R package vtreat [3][4]. The package (and impact coding itself) is described in an article by those authors from 2016 [5], and in several blog posts [6]. Note that the current R implementation does not handle multinomial (categorical with more than 2 categories) or multivariate (vector-valued) responses.
[1] Daniele Micci-Barreca (2001). A Preprocessing Scheme for High-Cardinality Categorical Attributes in Classification and Prediction Problems. ACM SIGKDD Explorations Newsletter, Volume 3, Issue 1, July 2001, Pages 27-32. https://doi.org/10.1145/507533.507538
[2] Category Encoders. http://contrib.scikit-learn.org/categorical-encoding/index.html
[3] John Mount and Nina Zumel (2017). vtreat: A Statistically Sound 'data.frame' Processor/Conditioner. R package version 0.5.32. https://CRAN.R-project.org/package=vtreat
[4] Win-Vector (2017). vtreat. GitHub repository at https://github.com/WinVector/vtreat
[5] Nina Zumel and John Mount (2016). vtreat: a data.frame Processor for Predictive Modeling. arXiv e-print 1611.09477v3. Available at https://arxiv.org/abs/1611.09477v3
[6] Win-Vector blog posts on vtreat. http://www.win-vector.com/blog/tag/vtreat/
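Here is a small Python sketch of the impact-coding formula itself (not of the category_encoders or vtreat APIs), with made-up data:

```python
import pandas as pd

# Made-up data: categorical Z with a few levels, continuous response Y.
df = pd.DataFrame({"Z": ["a", "a", "b", "b", "b", "c"],
                   "Y": [1.0, 2.0, 5.0, 6.0, 7.0, 3.0]})

# Impact(z) = E[Y | Z = z] - E[Y], estimated from the data.
# In practice, estimate the per-level means on training folds (or with smoothing)
# to avoid target leakage and overfitting on rare levels.
grand_mean = df["Y"].mean()
impact = df.groupby("Z")["Y"].mean() - grand_mean
df["Z_impact"] = df["Z"].map(impact)
print(df)
```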
1,861
Principled way of collapsing categorical variables with many levels?
There are multiple questions here, and some of them are asked & answered earlier. If the problem is computation taking a long time: there are multiple methods to deal with that, see large scale regression with sparse feature matrix and the paper by Maechler and Bates. But it might well be that the problem is with modeling: I am not so sure that the usual methods of treating categorical predictor variables really give sufficient guidance when there are categorical variables with very many levels; see this site for the tag [many-categories]. There are certainly many things one could try. One possibility (whether it is a good idea for your example I cannot know, since you didn't tell us your specific application) is a kind of hierarchical categorical variable, inspired by the system used in biological classification; see https://en.wikipedia.org/wiki/Taxonomy_(biology). There an individual (plant or animal) is classified first into Domain, then Kingdom, Phylum, Class, Order, Family, Genus and finally Species. So for each level in the classification you could create a factor variable. If your levels are, say, products sold in a supermarket, you could create a hierarchical classification starting with [foodstuff, kitchenware, other], then foodstuff could be classified as [meat, fish, vegetables, cereals, ...] and so on. Just a possibility, which gives a prior hierarchy, not specifically related to the outcome. But you said: "I care about producing higher-level categories that are coherent with respect to my regression outcome." Then you could try the fused lasso (see other answers in this thread), which could be seen as a way of collapsing the levels into larger groups entirely based on the data, not on a prior organization of the levels as implied by my proposal of a hierarchical organization of the levels.
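A toy Python sketch of what such hand-made hierarchy factors could look like; the product hierarchy is invented purely for illustration:

```python
import pandas as pd

# Invented product hierarchy: each fine-grained level gets coarser parent levels.
hierarchy = {
    "beef":     ("foodstuff", "meat"),
    "salmon":   ("foodstuff", "fish"),
    "oats":     ("foodstuff", "cereals"),
    "saucepan": ("kitchenware", "cookware"),
}

df = pd.DataFrame({"product": ["beef", "oats", "saucepan", "salmon"]})
df["level1"] = df["product"].map(lambda p: hierarchy[p][0])
df["level2"] = df["product"].map(lambda p: hierarchy[p][1])
print(df)   # one factor column per level of the hierarchy
```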
1,862
Principled way of collapsing categorical variables with many levels?
The paper "A preprocessing scheme for high-cardinality categorical attributes in classification and prediction problems" leverages hierarchical structure in the category attributes in a nested 'empirical Bayes' scheme at every pool/level to map the categorical variable into a posterior class probability, which can be used directly or as an input into other models.
1,863
What is meant by a "random variable"?
A random variable is a variable whose value depends on unknown events. We can summarize the unknown events as "state", and then the random variable is a function of the state. Example: Suppose we have three dice rolls ($D_{1}$,$D_{2}$,$D_{3}$). Then the state $S=(D_{1},D_{2},D_{3})$. One random variable $X$ is the number of 5s. This is: $$ X=(D_{1}=5?)+(D_{2}=5?)+(D_{3}=5?)$$ Another random variable $Y$ is the sum of the dice rolls. This is: $$ Y=D_{1}+D_{2}+D_{3} $$
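The same two random variables can be written directly as functions of the state; a small Python sketch:

```python
import itertools
from collections import Counter

# The state is the triple of dice rolls; random variables are just functions of it.
def X(state):                      # number of fives among the three rolls
    return sum(1 for d in state if d == 5)

def Y(state):                      # sum of the three rolls
    return sum(state)

# Enumerating the whole (equally likely) sample space tabulates, e.g., the pmf of X.
states = list(itertools.product(range(1, 7), repeat=3))
pmf_X = {x: n / len(states) for x, n in sorted(Counter(X(s) for s in states).items())}
print(pmf_X)    # roughly {0: 0.579, 1: 0.347, 2: 0.069, 3: 0.005}
```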
1,864
What is meant by a "random variable"?
Introduction In thinking over a recent comment, I notice that all replies so far suffer from the use of undefined terms like "variable" and vague terms like "unknown," or appeal to technical mathematical concepts like "function" and "probability space." What should we say to the non-mathematical person who would like a plain, intuitive, yet accurate definition of "random variable"? After some preliminaries describing a simple model of random phenomena, I provide such a definition that is short enough to fit on one line. Because it might not fully satisfy the cognoscenti, an afterward explains how to extend this to the usual technical definition. Tickets in a box One way to approach the idea behind a random variable is to appeal to the tickets-in-a-box model of randomness. This model replaces an experiment or observation by a box full of tickets. On each ticket is written a possible outcome of the experiment. (An outcome can be as simple as "heads" or "tails" but in practice it is a more complex thing, such as a history of stock prices, a complete record of a long experiment, or the sequence of all words in a document.) All possible outcomes appear at least once among the tickets; some outcomes may appear on many tickets. Instead of actually conducting the experiment, we imagine thoroughly--but blindly--mixing all the tickets and selecting just one. If we can show that the real experiment should behave as if it were conducted in this way, then we have reduced a potentially complicated (and expensive, and lengthy) real-world experiment to a simple, intuitive, thought experiment (or "statistical model"). The clarity and simplicity afforded by this model makes it possible to analyze the experiment. An example Standard examples concern outcomes of tossing coins and dice and drawing playing cards. These are somewhat distracting for their triviality, so to illustrate, suppose we are concerned about the outcome of the US presidential election in 2016. As a (tiny) simplification, I will assume that one of the two major parties--Republican (R) or Democratic (D)--will win. Because (with the information presently available) the outcome is uncertain, we imagine putting tickets into a box: some with "R" written on them and others with "D". Our model of the outcome is to draw exactly one ticket from this box. There is something missing: we haven't yet stipulated how many tickets there will be for each outcome. In fact, finding this out is the principal problem of statistics: based on observations (and theory), what can be said about the relative proportions of each outcome in the box? (I hope it's clear that the proportions of each kind of ticket in the box determine its properties, rather than the actual numbers of each ticket. The proportions are defined--as usual--to be the count of each kind of ticket divided by the total number of tickets. For instance, a box with one "D" ticket and one "R" ticket behaves exactly like a box with a million "D" tickets and a million "R" tickets, because in either case each type is 50% of all the tickets and therefore each has a 50% chance of being drawn when the tickets are thoroughly mixed.) Making the model quantitative But let's not pursue this question here, because we are near our goal of defining a random variable. The problem with the model so far is that it is not quantifiable, whereas we would like to be able to answer quantitative questions with it. 
And I don't mean trivial ones, either, but real, practical questions such as "if my company has a billion Euros invested in US offshore fossil fuel development, how much will the value of this investment change as a result of the 2016 election?" In this case the model is so simple that there's not much we can do to get a realistic answer to this question, but we could go so far as to consult our economic staff and ask for their opinions about the two possible outcomes: If the Democrats win, how much will the investment change? (Suppose the answer is $d$ dollars.) If the Republicans win, how much will it change? (Suppose the answer is $r$ dollars.) The answers are numbers. To use them in the model, I will ask my staff to go through all the tickets in the box and on every "D" ticket to write "$d$ dollars" and on every "R" ticket to write "$r$ dollars." Now we can model the uncertainty in the investment clearly and quantitatively: its post-election change in value is the same as receiving the amount of money written on a single ticket drawn randomly from this box. This model helps us answer additional questions about the investment. For instance, how uncertain should we be about the investment's value? Although there are (simple) mathematical formulas for this uncertainty, we could reproduce their answers reasonably accurately just by using our model repeatedly--maybe a thousand times over--to see what kinds of outcomes actually occur and measuring their spread. A tickets-in-a-box model gives us a way to reason quantitatively about uncertain outcomes. Random variables To obtain quantitative answers about uncertain or variable phenomena, we can adopt a ticket-in-a-box model and write numbers on the tickets. This process of writing numbers has to follow only a single rule: it must be consistent. In the example, every Democratic ticket has to have "$d$ dollars" written on it--no exceptions--and every Republican ticket has to have "$r$ dollars" written on it. A random variable is any consistent way to write numbers on tickets in a box. (The mathematical notation for this is to give a name to the renumbering process, typically with a capital latin letter like $X$ or $Y$. The identifying information written on the tickets is often named with little letters, typically $\omega$ (lower case Greek "omega"). The value associated by means of the random variable $X$ to the ticket $\omega$ is denoted $X(\omega)$. In the example, then, we might say something like "$X$ is a random variable representing the change in the investment's value." It would be fully specified by stating $X(\text{D})=d$ and $X(\text{R}) = r$. In more complicated cases, the values of $X$ are given by more complicated descriptions and, often, by formulas. For instance, the tickets might represent a year's worth of closing prices of a stock and the random variable $X$ might be the value at a particular time of some derivative on that stock, such as a put option. The option contract describes how $X$ is computed. Options traders use exactly this kind of model to price their products.) Did you notice that such an $X$ is neither random nor a variable? Neither is it "uncertain" or "unknown." It is a definite assignment (of numbers to outcomes), something we can write down with full knowledge and complete certainty. What is random is the process of drawing a ticket from the box; what is variable is the value on the ticket that might be drawn. 
Notice, too, the clean separation of two different issues involved in evaluating the investment: I asked my economists to determine $X$ for me, but not to opine about the election outcome. I will use other information (perhaps by calling in political consultants, astrologers, using a Ouija board, or whatever) to estimate the proportions of each of the "D" and "R" tickets to put in the box. Afterward: about measurability When the definition of random variable is accompanied with the caveat "measurable," what the definer has in mind is a generalization of the tickets-in-a-box model to situations with infinitely many possible outcomes. (Technically, it is needed only with uncountably infinite outcomes or where irrational probabilities are involved, and even in the latter case can be avoided.) With infinitely many outcomes it is difficult to say what the proportion of the total would be. If there are infinitely many "D" tickets and infinitely many "R" tickets, what are their relative proportions? We can't find out with a mere division of one infinity by another! In these cases, we need a different way to specify the proportions. A "measurable" set of tickets is any collection of tickets in the box for which their proportion can be defined. When this is done, the number we have been thinking of as a "proportion" is called the "probability." (Not every collection of tickets need have a probability associated with it.) In addition to satisfying the consistency requirement, a random variable $X$ has to allow us to compute probabilities that are associated with natural questions about the outcomes. Specifically, we want assurance that questions of the form "what is the chance that the value $X(\omega)$ will lie between such-and-such ($a$) and such-and-such ($b$)?" will actually have mathematically well-defined answers, no matter what two values we give for the limits $a$ and $b$. Such rewriting procedures are said to be "measurable." All random variables must be measurable, by definition.
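As a concrete companion to the repeated-drawing idea described earlier in this answer, here is a minimal Python sketch of drawing tickets from the box; the dollar amounts and the proportion of "D" tickets are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up inputs: the dollar amounts written on the tickets and the proportion
# of "D" tickets in the box (both are illustrative, not real estimates).
d, r = -2.0e9, 1.0e9     # change in value under each outcome
p_D = 0.5                # proportion of "D" tickets

# Draw a ticket many times over and read off X(omega) each time.
tickets = rng.choice(["D", "R"], size=10_000, p=[p_D, 1 - p_D])
X = np.where(tickets == "D", d, r)

print(X.mean())   # Monte Carlo estimate of the expected change in value
print(X.std())    # Monte Carlo estimate of the spread (the uncertainty)
```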
1,865
What is meant by a "random variable"?
Informally, a random variable is a way to assign a numerical code to each possible outcome.* Example 1 I flip a coin. The set of possible outcomes (also called the "sample space") may be written as $\{H,T\}$. An example of a random variable $X$ might assign $X(H)=1$ and $X(T)=0$. That is, heads is "coded" as $1$ and tails is "coded" as $0$. Example 2 I draw a card from a standard 52-card deck. The set of possible outcomes is $$\{A♠, K♠, \dots, 2♠, A♡, K♡, \dots, 2♡, A♢, K♢, \dots, 2♢, A♣, K♣, \dots, 2♣ \}.$$ In bridge, an ace is worth 4 high card points, a king 3, a queen 2, and a jack 1. Any other card is worth 0 points. So we might let $Y$ be the corresponding random variable, where for example $Y\left(A♡ \right)=4$, $Y\left(J♣ \right)=1$, and $Y\left(7♠ \right)=0$. What's the point of random variables? One simple answer is that abstract symbols like "$H$", "$T$" or "$A♠$" are sometimes difficult and troublesome to handle. So we instead translate them into numbers, which are easier to manipulate. $$$$ *Formally a random variable is a function that maps each outcome (in the sample space) to a real number.
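A small Python sketch of Example 2: once outcomes are coded as numbers, quantities such as the total high-card points of a hand become simple sums:

```python
import random

# The random variable Y codes each card (an abstract outcome) as a number:
# its bridge high-card points. Numbers can then be summed, averaged, etc.
hcp = {"A": 4, "K": 3, "Q": 2, "J": 1}
ranks = ["A", "K", "Q", "J", "10", "9", "8", "7", "6", "5", "4", "3", "2"]
suits = ["S", "H", "D", "C"]
deck = [(rank, suit) for rank in ranks for suit in suits]

def Y(card):
    rank, _suit = card
    return hcp.get(rank, 0)       # non-honour cards are worth 0 points

hand = random.sample(deck, 13)                # a random bridge hand
print(sum(Y(card) for card in hand))          # total high-card points of the hand
```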
1,866
What is meant by a "random variable"?
I was told this story: A random variable can be compared with the holy roman empire: The Holy Roman Empire was not holy, it was not roman, and it was not an empire. In the same way, a Random Variable is neither random, nor a variable. It is just a function. (the story was told here: source). This is at least a quippy way to explain, which might help people remember!
1,867
What is meant by a "random variable"?
Unlike a regular variable, a random variable may not be substituted for a single, unchanging value. Rather, statistical properties such as the distribution of the random variable may be stated. The distribution is a function that gives the probability that the variable takes on a given value or falls within a given range, often expressed in terms of parameters such as the mean or standard deviation. Random variables are classified as discrete if the distribution describes values from a countable set, such as the integers. The other classification for a random variable is continuous, used if the distribution covers values from an uncountable set such as the real numbers.
1,868
What is meant by a "random variable"?
A random variable, usually denoted X, is a variable whose outcome is uncertain. The observation of a particular outcome of this variable is called a realisation. More formally, it is a function which maps a probability space into a measurable space, usually called the state space. Random variables are discrete (taking values in a countable set, such as a set of integers) or continuous (taking values in an uncountable set, such as an interval of real numbers). Consider the random variable X which is the total obtained when rolling two fair dice. It can take any of the values 2-12 (though not with equal probability: a total of 7 is the most likely, while 2 and 12 are the least likely), and the outcome is uncertain until the dice are rolled.
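A quick enumeration in Python shows the (unequal) probabilities of each total:

```python
from collections import Counter
from itertools import product

# Totals of two fair dice: the values 2..12 are NOT equally likely.
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
pmf = {total: n / 36 for total, n in sorted(counts.items())}
print(pmf)   # P(7) = 6/36 is the largest, P(2) = P(12) = 1/36 the smallest
```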
1,869
What is meant by a "random variable"?
From Wikipedia: In mathematics (especially probability theory and statistics), a random variable (or stochastic variable) is (in general) a measurable function that maps a probability space into a measurable space. Random variables mapping all possible outcomes of an event into the real numbers are frequently studied in elementary statistics and used in the sciences to make predictions based on data obtained from scientific experiments. In addition to scientific applications, random variables were developed for the analysis of games of chance and stochastic events. The utility of random variables comes from their ability to capture only the mathematical properties necessary to answer probabilistic questions. From cnx.org: A random variable is a function, which assigns unique numerical values to all possible outcomes of a random experiment under fixed conditions. A random variable is not a variable but rather a function that maps events to numbers.
1,870
What is meant by a "random variable"?
The sample space may be a set of arbitrary elements, e.g. $\{\color{red} {\text{red}}, \color{green} {\text{green}}, \color{blue} {\text{blue}}\}$. But this freedom of possible outcomes is difficult to work with. So someone invented the idea of working not with arbitrary elements, but only with real numbers. To reach it, the first thing is to map such elements to real numbers, e.g. \begin{aligned} \color{red} {\text{red }} &\mapsto 6.72\\ \color{green} {\text{green}} &\mapsto -2\\ \color{blue} {\text{blue}} &\mapsto 19.5 \end{aligned} And that mapping is called a random variable. After choosing that mapping we get rid of problems how to operate with arbitrary things, because now we may do various calculations with numbers.
1,871
What is meant by a "random variable"?
In my non-math university studies, we were told that a random variable is a map from the values the variable can take to their probabilities. This allowed us to draw probability distributions. Recently, I realized how different that is from what mathematicians have in mind. It turns out that by a random variable they mean simply a function $X: \Omega \to \mathbb R,$ which takes an element of the sample space $\Omega$ (aka outcome, ticket or individual, as explained above) and translates it into a real number in the range $(-\infty, \infty).$ That is, as was aptly noted above, it is neither random nor a variable. The randomness usually comes with the probability measure $P,$ as part of the measure space $(\Omega, P).$ $P$ maps events to real numbers, similarly to a random variable, but this time with the range limited to $[0,1],$ and we can say that the random variable translates $(\Omega, P)$ into $(\mathbb R, P)$; thus the random variable comes equipped with a probability measure $P: \mathbb R \to [0,1],$ so that for every $x \in \mathbb R$ you can say what the probability of its occurrence is. I do not know why you need this kind of random variable, or why you cannot sample the elements of $\mathbb R$ in the first place, but it seems that translating samples into numeric values allows us to order the samples, draw the distribution and compute the expectation. I got this idea from reading A Measure Theory Tutorial (Measure Theory for Dummies). Perhaps mathematicians have better applications of random variables in mind, but I could not find them in my superficial study. The very same text suggests that you do not always need to convert samples into numbers; in particular, the entropy of an alphabet $\Omega$, $$H(\Omega) = -\sum_i P(\Omega_i) \ln P(\Omega_i),$$ does not need any real values of the random variable.
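To illustrate that last point, here is a small R sketch (the three-letter alphabet and its probabilities are made up for the example): the entropy is computed from the probability measure alone, with no numeric coding of the outcomes.

    # entropy of a discrete alphabet needs only P(omega),
    # not numeric values of a random variable
    p <- c(a = 0.5, b = 0.3, c = 0.2)  # hypothetical probabilities, summing to 1
    H <- -sum(p * log(p))              # Shannon entropy in nats
    H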
1,872
What is meant by a "random variable"?
A random variable is a measurable function defined on a probability space: If $\left(\Omega,\mathcal F, \mathbb P\right)$ is a probability space and $\left(S, \mathcal A \right)$ a measurable space, then any $\mathcal F/\mathcal A$-measurable function $Y: \Omega \to S$ is called an ($S$-valued) random variable. Some authors use a less general definition in which $\left(S, \mathcal A \right) \equiv \left(\mathbb R, \mathop{\mathcal B}\left(\mathbb R\right) \right)$ is required, where $\mathop{\mathcal B}\left(\mathbb R\right)$ is the Borel $\sigma$-algebra on $\mathbb R$.
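As a concrete (textbook-style) instance of this definition, not taken from the answer itself: for a fair coin toss with $\Omega = \{H, T\}$, $\mathcal F = 2^{\Omega}$ and $\mathbb P(\{H\}) = \mathbb P(\{T\}) = 1/2$, the indicator of heads $$Y(\omega) = \begin{cases} 1, & \omega = H,\\ 0, & \omega = T, \end{cases}$$ is a real-valued random variable, since $Y^{-1}(B) \in \mathcal F$ for every Borel set $B \subseteq \mathbb R$.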
1,873
Locating freely available data samples
Also see the UCI Machine Learning Repository: http://archive.ics.uci.edu/ml/
1,874
Locating freely available data samples
The following list contains many data sets you may be interested in:
America's Best Colleges - U.S. News & World Reports
American FactFinder
The Baseball Archive
The Bureau of Justice Statistics
The Bureau of Labor Statistics
The Bureau of Transportation Statistics
The Census Bureau
Data and Story Library (DASL)
Data Sets, UCLA Statistics Department
DIG Stats
Economic Research Service, US Department of Agriculture
Energy Information Administration
Eurostat
Exploring Data
FedStats
The Gallup Organization
International Fuel Prices
Journal of Statistics Education Data Archive
Kentucky Derby Race Statistics
National Center for Education Statistics
National Center for Health Statistics
National Climatic Data Center
National Geophysical Data Center
National Oceanic and Atmospheric Administration
Sports Data Resources
Statistics Canada
StatLib---Datasets Archive
UK Government Statistical Service
United Nations: Cyber SchoolBus Resources
1,875
Locating freely available data samples
See my response to "Datasets for Running Statistical Analysis on" in reference to datasets in R.
1,876
Locating freely available data samples
The World Bank offers quite a lot of interesting data and has recently been very active in developing a nice API for it. The commugrate project also has an interesting list available. For US health-related data, head for the Health Indicators Warehouse. Daniel Lemire's blog points to a few interesting examples (mostly tailored towards DB research), including the Canadian Census 1880 and synoptic cloud reports. As of today (03/04/2012), the US 1940 census records are also available to download.
1,877
Locating freely available data samples
Gapminder has a number (430 at the last look) of datasets, which may or may not be of use to you.
1,878
Locating freely available data samples
MLComp has quite a few interesting datasets, and as a bonus your algorithm will get ranked if you upload it.
1,879
Locating freely available data samples
A good place to look is Carnegie Mellon University's Data and Story Library or DASL, which contains data files that "illustrate the use of basic statistics methods... A good example can make a lesson on a particular statistics method vivid and relevant. DASL is designed to help teachers locate and identify datafiles for teaching. We hope that DASL will also serve as an archive for datasets from the statistics literature."
1,880
Locating freely available data samples
Start R and type data(). This will show all datasets in the search path. Many additional datasets are available in add-on packages. For example, there are some interesting real-world social science datasets in the AER package.
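For example (a minimal sketch; the AER package is mentioned only as an illustration and must be installed separately):

    data()                     # list the datasets on the current search path
    head(mtcars)               # peek at one of R's built-in datasets
    # install.packages("AER")  # uncomment if the AER package is not yet installed
    data(package = "AER")      # list the datasets shipped with the AER package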
1,881
Locating freely available data samples
NIST provides a Reference Dataset archive.
1,882
Locating freely available data samples
http://www.reddit.com/r/datasets and also, http://www.reddit.com/r/opendata both contain a constantly growing list of pointers to various datasets.
1,883
Locating freely available data samples
The Stack Exchange network now has a new site, Open Data (in beta as of March 5th, 2015), dedicated to data. It describes itself as: Open Data Stack Exchange is a question and answer site for developers and researchers interested in open data. It's built and run by you as part of the Stack Exchange network of Q&A sites. With your help, we're working together to build a library of detailed answers to every question about open data. "Open data" refers to datasets that are "freely available to everyone to use and republish as they wish, without restrictions from copyright, patents or other mechanisms of control" (Wikipedia). However, the site seems amenable to requests for closed datasets.
1,884
Locating freely available data samples
Timetric provides a web interface to data and provides a list of the publicly available data sets they use.
1,885
Locating freely available data samples
Adding a couple to the list:
In-depth financial data on publicly-traded companies, going back many decades: http://www.mergent.com/servius
Rich information on 16+ million businesses in the US: http://compass.webservius.com
Both are available via a REST API and have free trial plans.
1,886
Locating freely available data samples
This is probably the most complete list you'll find: Some Datasets Available on the Web
1,887
Locating freely available data samples
Peter Skomoroch maintains a list of datasets at http://www.datawrangling.com/some-datasets-available-on-the-web. Many of the links provided are to places that list datasets.
1,888
Locating freely available data samples
I highly recommend checking out quandl.com. This is a data programmer's dream. It provides one very easy API to access any of over 10 million different data sets. You are looking for bimodal or multivariate data, so I would suggest checking out the various sets of population data; e.g., this world population chart contains the sub-component countries and territories that go into the total.
1,889
Locating freely available data samples
Data sets from the seminal book A Handbook of Small Data Sets are available here.
1,890
Locating freely available data samples
Here's another list that might be of help.
1,891
Locating freely available data samples
Searching for an appropriate data set for my needs, I have just stumbled across two sites that are pertinent to this discussion. Datacite.org, which describes itself as... We are an international organisation which aims to: establish easier access to research data; increase acceptance of research data as legitimate contributions in the scholarly record; and to support data archiving to permit results to be verified and re-purposed for future study. DataBib.org, which describes itself as... Databib is a tool for helping people identify and locate online repositories of research data. Users and bibliographers create and curate records that describe data repositories that users can search. Thought it would be worth adding them to the list here for others. Now to find something within their links that fits my needs!
1,892
Locating freely available data samples
Usage Over Time: a very large Excel spreadsheet available for download, containing data points for all online activities, with user demographics, over time. Please read the Tip Sheet (below) before downloading or using this spreadsheet. http://pewinternet.org/Trend-Data/Usage-Over-Time.aspx
1,893
Locating freely available data samples
http://www.ckan.net has a number of datasets too. http://www.biotorrents.net/browse.php is also starting to have quite a large amount of BIG datasets.
1,894
Locating freely available data samples
Here's a list of quantitative social science data: https://f.briatte.org/teaching/quanti I update it every 3-4 months or so. At the very end, I cite the "Data is Plural" newsletter, which has tons of interesting stuff. It's my favourite newsletter ever.
1,895
Locating freely available data samples
SODA POP at Penn State; Simple Online Data Archive for POPulation studies.
1,896
Locating freely available data samples
I'm gonna go ahead and bump an old topic because I just found this mother lode: http://vincentarelbundock.github.io/Rdatasets/
1,897
Locating freely available data samples
Singapore announces Open Data initiative. Check out data.gov.sg similar to data.gov in the US.
1,898
What's the difference between correlation and simple linear regression?
What's the difference between the correlation between $X$ and $Y$ and a linear regression predicting $Y$ from $X$?
First, some similarities:
The standardised regression coefficient is the same as Pearson's correlation coefficient.
The square of Pearson's correlation coefficient is the same as the $R^2$ in simple linear regression.
The sign of the unstandardized coefficient (i.e., whether it is positive or negative) will be the same as the sign of the correlation coefficient.
Neither simple linear regression nor correlation answers questions of causality directly. This point is important, because I've met people who think that simple regression can magically allow an inference that $X$ causes $Y$.
Standard tests of the null hypothesis (i.e., "correlation = 0" or, equivalently, "slope = 0" for the regression in either order), such as those carried out by lm and cor.test in R, will yield identical p-values.
Second, some differences:
The regression equation (i.e., $a + bX$) can be used to make predictions of $Y$ based on values of $X$.
While correlation typically refers to a linear relationship, it can refer to other forms of dependence, such as polynomial or truly nonlinear relationships.
While correlation typically refers to Pearson's correlation coefficient, there are other types of correlation, such as Spearman's.
The correlation between $X$ and $Y$ is the same as the correlation between $Y$ and $X$. In contrast, the unstandardized coefficient typically changes when moving from a model predicting $Y$ from $X$ to a model predicting $X$ from $Y$.
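A short R sketch of these equivalences on simulated data (the variables and sample size below are made up for illustration):

    set.seed(42)
    x <- rnorm(100)
    y <- 2 * x + rnorm(100)

    r <- cor(x, y)
    coef(lm(scale(y) ~ scale(x)))[2]       # standardised slope; equals r
    summary(lm(y ~ x))$r.squared           # equals r^2
    r^2
    cor.test(x, y)$p.value                 # same p-value as ...
    summary(lm(y ~ x))$coefficients[2, 4]  # ... the t-test of the regression slope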
1,899
What's the difference between correlation and simple linear regression?
Here is an answer I posted on the graphpad.com website:
Correlation and linear regression are not the same. Consider these differences:
Correlation quantifies the degree to which two variables are related. Correlation does not fit a line through the data.
With correlation you don't have to think about cause and effect. You simply quantify how well two variables relate to each other. With regression, you do have to think about cause and effect, as the regression line is determined as the best way to predict Y from X.
With correlation, it doesn't matter which of the two variables you call "X" and which you call "Y". You'll get the same correlation coefficient if you swap the two. With linear regression, the decision of which variable you call "X" and which you call "Y" matters a lot, as you'll get a different best-fit line if you swap the two. The line that best predicts Y from X is not the same as the line that predicts X from Y (unless you have perfect data with no scatter).
Correlation is almost always used when you measure both variables. It rarely is appropriate when one variable is something you experimentally manipulate. With linear regression, the X variable is usually something you experimentally manipulate (time, concentration...) and the Y variable is something you measure.
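The point about swapping "X" and "Y" is easy to check in R (simulated data; the observation that the two slopes multiply to $r^2$ is a standard identity added here for illustration, not a claim from the answer above):

    set.seed(7)
    x <- rnorm(50)
    y <- 0.5 * x + rnorm(50)

    b_yx <- coef(lm(y ~ x))[2]  # slope of the line predicting y from x
    b_xy <- coef(lm(x ~ y))[2]  # slope of the line predicting x from y
    c(b_yx, b_xy)               # generally different best-fit lines
    b_yx * b_xy                 # equals cor(x, y)^2
    cor(x, y)^2                 # while the correlation itself is symmetric in x and y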
1,900
What's the difference between correlation and simple linear regression?
In the single-predictor case of linear regression, the standardized slope has the same value as the correlation coefficient. The advantage of linear regression is that the relationship can be described in such a way that you can predict (based on the relationship between the two variables) the score on the predicted variable for any particular value of the predictor variable. In particular, one piece of information a linear regression gives you that a correlation does not is the intercept: the value of the predicted variable when the predictor is 0. In short, they produce identical results computationally, but there are more elements that can be interpreted in simple linear regression. If you are interested in simply characterizing the magnitude of the relationship between two variables, use correlation; if you are interested in predicting or explaining your results in terms of particular values, you probably want regression.