What is the difference between estimation and prediction?
"Prediction" and "estimation" indeed are sometimes used interchangeably in non-technical writing and they seem to function similarly, but there is a sharp distinction between them in the standard model of a statistical problem. An estimator uses data to guess at a parameter while a predictor uses the data to guess at some random value that is not part of the dataset. For those who are unfamiliar with what "parameter" and "random value" mean in statistics, the following provides a detailed explanation. In this standard model, data are assumed to constitute a (possibly multivariate) observation $\mathbf{x}$ of a random variable $X$ whose distribution is known only to lie within a definite set of possible distributions, the "states of nature". An estimator $t$ is a mathematical procedure that assigns to each possible value of $\mathbf{x}$ some property $t(\mathbf{x})$ of a state of nature $\theta$, such as its mean $\mu(\theta)$. Thus an estimate is a guess about the true state of nature. We can tell how good an estimate is by comparing $t(\mathbf{x})$ to $\mu(\theta)$. A predictor $p(\mathbf{x})$ concerns the independent observation of another random variable $Z$ whose distribution is related to the true state of nature. A prediction is a guess about another random value. We can tell how good a particular prediction is only by comparing $p(\mathbf{x})$ to the value realized by $Z$. We hope that on average the agreement will be good (in the sense of averaging over all possible outcomes $\mathbf{x}$ and simultaneously over all possible values of $Z$). Ordinary least squares affords the standard example. The data consist of pairs $(x_i,y_i)$ associating values $y_i$ of the dependent variable to values $x_i$ of the independent variable. The state of nature is specified by three parameters $\alpha$, $\beta$, and $\sigma$: it says that each $y_i$ is like an independent draw from a normal distribution with mean $\alpha + \beta x_i$ and standard deviation $\sigma$. $\alpha$, $\beta$, and $\sigma$ are parameters (numbers) believed to be fixed and unvarying. Interest focuses on $\alpha$ (the intercept) and $\beta$ (the slope). The OLS estimate, written $(\hat{\alpha}, \hat{\beta})$, is good in the sense that $\hat{\alpha}$ tends to be close to $\alpha$ and $\hat{\beta}$ tends to be close to $\beta$, no matter what the true (but unknown) values of $\alpha$ and $\beta$ might be. OLS prediction consists of observing a new value $Z = Y(x)$ of the dependent variable associated with some value $x$ of the independent variable. $x$ might or might not be among the $x_i$ in the dataset; that is immaterial. One intuitively good prediction is that this new value is likely to be close to $\hat{\alpha} + \hat{\beta}x$. Better predictions say just how close the new value might be (they are called prediction intervals). They account for the fact that $\hat{\alpha}$ and $\hat{\beta}$ are uncertain (because they depend mathematically on the random values $(y_i)$), that $\sigma$ is not known for certain (and therefore has to be estimated), as well as the assumption that $Y(x)$ has a normal distribution with standard deviation $\sigma$ and mean $\alpha + \beta x$ (note the absence of any hats!). Note especially that this prediction has two separate sources of uncertainty: uncertainty in the data $(x_i,y_i)$ leads to uncertainty in the estimated slope, intercept, and residual standard deviation ($\sigma$); in addition, there is uncertainty in just what value of $Y(x)$ will occur. 
This additional uncertainty--because $Y(x)$ is random--characterizes predictions. A prediction may look like an estimate (after all, $\hat{\alpha} + \hat{\beta}x$ estimates $\alpha+\beta x$ :-) and may even have the very same mathematical formula ($p(\mathbf{x})$ can sometimes be the same as $t(\mathbf{x})$), but it will come with a greater amount of uncertainty than the estimate. Here, then, in the example of OLS, we see the distinction clearly: an estimate guesses at the parameters (which are fixed but unknown numbers), while a prediction guesses at the value of a random quantity. The source of potential confusion is that the prediction usually builds on the estimated parameters and might even have the same formula as an estimator. In practice, you can distinguish estimators from predictors in two ways: purpose: an estimator seeks to know a property of the true state of nature, while a prediction seeks to guess the outcome of a random variable; and uncertainty: a predictor usually has larger uncertainty than a related estimator, due to the added uncertainty in the outcome of that random variable. Well-documented and -described predictors therefore usually come with uncertainty bands--prediction intervals--that are wider than the uncertainty bands of estimators, known as confidence intervals. A characteristic feature of prediction intervals is that they can (hypothetically) shrink as the dataset grows, but they will not shrink to zero width--the uncertainty in the random outcome is "irreducible"--whereas the widths of confidence intervals will tend to shrink to zero, corresponding to our intuition that the precision of an estimate can become arbitrarily good with sufficient amounts of data. In applying this to assessing potential investment loss, first consider the purpose: do you want to know how much you might actually lose on this investment (or this particular basket of investments) during a given period, or are you really just guessing what is the expected loss (over a large universe of investments, perhaps)? The former is a prediction, the latter an estimate. Then consider the uncertainty. How would your answer change if you had nearly infinite resources to gather data and perform analyses? If it would become very precise, you are probably estimating the expected return on the investment, whereas if you remain highly uncertain about the answer, you are making a prediction. Thus, if you're still not sure which animal you're dealing with, ask this of your estimator/predictor: how wrong is it likely to be and why? By means of both criteria (1) and (2) you will know what you have.
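To make the width difference concrete, here is a minimal numerical sketch (my addition, not part of the original answer) using simulated data and statsmodels; the true $\alpha$, $\beta$, $\sigma$ and the point $x=5$ are arbitrary illustrative choices. The confidence interval for $\alpha + \beta x$ comes out noticeably narrower than the prediction interval for a new $Y(x)$.

```python
# Sketch: confidence interval for the estimated mean vs. prediction interval
# for a new observation in simple OLS, using simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
alpha_true, beta_true, sigma_true = 1.0, 2.0, 1.5   # illustrative values
x = rng.uniform(0, 10, size=50)
y = alpha_true + beta_true * x + rng.normal(0, sigma_true, size=50)

X = sm.add_constant(x)            # design matrix with intercept column
fit = sm.OLS(y, X).fit()          # produces alpha-hat, beta-hat, sigma-hat

x_new = np.array([[1.0, 5.0]])    # intercept term and x = 5
pred = fit.get_prediction(x_new).summary_frame(alpha=0.05)

# mean_ci_*: confidence interval for alpha + beta*x (an estimate)
# obs_ci_*:  prediction interval for a new Y(x) (a prediction) -- wider
print(pred[["mean", "mean_ci_lower", "mean_ci_upper",
            "obs_ci_lower", "obs_ci_upper"]])
```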
What is the difference between estimation and prediction?
Estimation is always for an unknown parameter, whereas prediction is for a random variable.
What is the difference between estimation and prediction?
There is no difference in the models. There is indeed a (slight) difference in the action conducted. Estimation is the calibration of your probabilistic model using data ("learning" in the AI terminology). Prediction is the "guessing" of a future observation. If this "guessing" is based on past data, it might be a case of estimation, such as predicting the height of the next person you are about to meet using an estimate of the mean height in the population. Note, though, that prediction is not always an instance of estimation. The gender of the next person you are about to meet is not a parameter of the population in the classical sense; predicting the gender might require some estimation, but it will require some more... In the value-at-risk case, the prediction and estimation coincide, since your predicted loss is the estimated expectation of the loss.
What is the difference between estimation and prediction?
Usually "estimation" is reserved for parameters and the "predicition" is for values. However, sometimes the distinction gets blurred, e.g. you may have seen something like "estimate the value tomorrow" instead of "predict the value tomorrow." The value-at-risk (VaR) is an interesting case. VaR is not a parameter, but we don't say "predict VaR." We say "estimate VaR." Why? The reason in that VaR is not a random quantity IF you know the distribution, AND you need to know the distribution to calculate VaR. So, you if you're using parametric VaR approach, then you first estimate the parameters of the distribution then calculate VaR. If you're using the nonparametric VaR, then you directly estimate VaR similar to how you would estimate parameters. In this regard it's similar to quantile. On the other hand, the loss amount is a random value. Hence, if you're asked to forecast losses, you'd be predicting them not estimating. Again, sometimes we say "estimate" loss. So, the line is blurred, as I wrote earlier.
What is the difference between estimation and prediction?
Prediction is the use of a sample regression function to estimate a value of the dependent variable conditioned on unobserved values of the independent variable. Estimation is the process or technique of calculating an unknown parameter or quantity of the population.
What is the difference between estimation and prediction?
I find the definitions below more explanatory:

Estimation is the calculated approximation of a result. This result might be a forecast but not necessarily. For example, I can estimate that the number of cars on the Golden Gate Bridge at 5 PM yesterday was 900 by assuming the three lanes going toward Marin were at capacity, each car takes 30 feet of space, and the bridge is 9000 feet long (9000 / 30 x 3 = 900).

Extrapolation is estimating the value of a variable outside a known range of values by assuming that the estimated value follows some pattern from the known ones. The simplest and most popular form of extrapolation is estimating a linear trend based on the known data (see the sketch after these definitions). Alternatives to linear extrapolation include polynomial and conical extrapolation. Like estimation, extrapolation can be used for forecasting but it isn't limited to forecasting.

Prediction is simply saying something about the future. Predictions are usually focused on outcomes and not the pathway to those outcomes. For example, I could predict that by 2050 all vehicles will be powered with electric motors without explaining how we get from low adoption in 2011 to full adoption by 2050. As you can see from the previous example, predictions are not necessarily based on data.

Forecasting is the process of making a forecast or prediction. The terms forecast and prediction are often used interchangeably, but sometimes forecasts are distinguished from predictions in that forecasts often provide explanations of the pathways to an outcome. For example, an electric vehicle adoption forecast might include the pathway to full electric vehicle adoption following an S-shaped adoption pattern where few cars are electric before 2025, an inflection point occurs at 2030 with rapid adoption, and the majority of cars are electric after 2040.

Estimation, extrapolation, prediction, and forecasting are not mutually exclusive and collectively exhaustive terms. Good long-term forecasts for complex problems often need to use techniques other than extrapolation in order to produce plausible results. Forecasts and predictions can also occur without any kind of calculated estimation. (See links: definitions1, definitions2.)
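As a small illustration of linear extrapolation in the sense described above, the following sketch fits a linear trend to made-up adoption figures and evaluates it well outside the observed range; the numbers are invented purely for illustration and carry no forecasting claim.

```python
# Sketch: linear extrapolation -- fit a trend on known data, then evaluate it
# outside the observed range. Data and horizon are hypothetical.
import numpy as np

years = np.array([2011, 2013, 2015, 2017, 2019, 2021])
ev_share = np.array([0.1, 0.4, 0.8, 1.5, 2.5, 4.0])    # made-up EV share (%)

slope, intercept = np.polyfit(years, ev_share, deg=1)  # estimate linear trend
for year in (2030, 2040, 2050):
    print(year, round(slope * year + intercept, 1))    # extrapolated values
```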
Binary classification with strongly unbalanced classes
Both hxd1011 and Frank are right (+1). Essentially resampling and/or cost-sensitive learning are the two main ways of getting around the problem of imbalanced data; a third is to use kernel methods that sometimes might be less affected by the class imbalance. Let me stress that there is no silver-bullet solution. By definition you have one class that is represented inadequately in your samples.

Having said the above, I believe that you will find the algorithms SMOTE and ROSE very helpful. SMOTE effectively uses a $k$-nearest neighbours approach to exclude members of the majority class while in a similar way creating synthetic examples of a minority class. ROSE tries to create estimates of the underlying distributions of the two classes using a smoothed bootstrap approach and sample them for synthetic examples. Both are readily available in R, SMOTE in the package DMwR and ROSE in the package with the same name. Both SMOTE and ROSE result in a training dataset that is smaller than the original one.

I would probably argue that a better (or less bad) metric for the case of imbalanced data is Cohen's $\kappa$ and/or the area under the Receiver Operating Characteristic curve. Cohen's kappa directly controls for the expected accuracy, while the AUC, being a function of sensitivity and specificity, is insensitive to disparities in the class proportions. Again, notice that these are just metrics that should be used with a large grain of salt. You should ideally adapt them to your specific problem, taking account of the gains and costs that correct and wrong classifications convey in your case. I have found that looking at lift curves is actually rather informative for this matter. Irrespective of your metric, you should try to use a separate test set to assess the performance of your algorithm; exactly because of the class imbalance, over-fitting is even more likely, so out-of-sample testing is crucial.

Probably the most popular recent paper on the matter is Learning from Imbalanced Data by He and Garcia. It gives a very nice overview of the points raised by myself and in other answers. In addition, I believe that the walk-through on Subsampling For Class Imbalances, presented by Max Kuhn as part of the caret package, is an excellent resource for a structured example of how under-/over-sampling as well as synthetic data creation can be measured against each other.
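The packages named above are R packages; purely as an illustration, here is roughly what a comparable workflow might look like in Python with the imbalanced-learn and scikit-learn libraries (a substitution of tooling on my part, not the recipe in the answer), evaluating with Cohen's kappa and the AUC rather than accuracy.

```python
# Sketch: SMOTE oversampling of the training data only, then evaluation with
# Cohen's kappa and ROC AUC on the untouched, still-imbalanced test set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score, roc_auc_score
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=5000, weights=[0.97, 0.03], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Resample only the training data; the test set keeps the real imbalance.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
prob = clf.predict_proba(X_te)[:, 1]
pred = clf.predict(X_te)

print("kappa:", cohen_kappa_score(y_te, pred))
print("AUC:  ", roc_auc_score(y_te, prob))
```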
Binary classification with strongly unbalanced classes
First, the evaluation metric for imbalanced data would not be accuracy. Suppose you are doing fraud detection and 99.9% of your data is not fraud. We can easily make a dummy model that has 99.9% accuracy (just predict all data as non-fraud). You want to change your evaluation metric from accuracy to something else, such as the F1 score or precision and recall. In the second link I provided there are details and intuitions on why precision-recall will work. For highly imbalanced data, building a model can be very challenging. You may play with a weighted loss function or model one class only, such as a one-class SVM, or fit a multivariate Gaussian (as in the link I provided before).
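To see the accuracy trap and one of the suggested remedies in code, here is a minimal sketch (my construction, using scikit-learn on simulated data): a majority-class dummy model scores near-perfect accuracy, while a weighted-loss model is judged by precision, recall, and F1 instead.

```python
# Sketch: accuracy is misleading on a 99.9%-negative problem; precision,
# recall and F1 on a weighted-loss model are more informative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

X, y = make_classification(n_samples=100_000, weights=[0.999, 0.001],
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# The "dummy" model: always predict the majority class -> ~99.9% accuracy.
dummy = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
print("dummy accuracy:", accuracy_score(y_te, dummy.predict(X_te)))

# Weighted loss via class_weight, evaluated with class-sensitive metrics.
weighted = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)
pred = weighted.predict(X_te)
print("precision:", precision_score(y_te, pred),
      "recall:", recall_score(y_te, pred),
      "F1:", f1_score(y_te, pred))
```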
Binary classification with strongly unbalanced classes
Class imbalance issues can be addressed with either cost-sensitive learning or resampling. See advantages and disadvantages of cost-sensitive learning vs. sampling, copy-pasted below. {1} gives a list of advantages and disadvantages of cost-sensitive learning vs. sampling:

2.2 Sampling

Oversampling and undersampling can be used to alter the class distribution of the training data and both methods have been used to deal with class imbalance [1, 2, 3, 6, 10, 11]. The reason that altering the class distribution of the training data aids learning with highly-skewed data sets is that it effectively imposes non-uniform misclassification costs. For example, if one alters the class distribution of the training set so that the ratio of positive to negative examples goes from 1:1 to 2:1, then one has effectively assigned a misclassification cost ratio of 2:1. This equivalency between altering the class distribution of the training data and altering the misclassification cost ratio is well known and was formally described by Elkan [9].

There are known disadvantages associated with the use of sampling to implement cost-sensitive learning. The disadvantage with undersampling is that it discards potentially useful data. The main disadvantage with oversampling, from our perspective, is that by making exact copies of existing examples, it makes overfitting likely. In fact, with oversampling it is quite common for a learner to generate a classification rule to cover a single, replicated, example. A second disadvantage of oversampling is that it increases the number of training examples, thus increasing the learning time.

2.3 Why Use Sampling?

Given the disadvantages with sampling, it is worth asking why anyone would use it rather than a cost-sensitive learning algorithm for dealing with data with a skewed class distribution and non-uniform misclassification costs. There are several reasons for this. The most obvious reason is that there are not cost-sensitive implementations of all learning algorithms and therefore a wrapper-based approach using sampling is the only option. While this is certainly less true today than in the past, many learning algorithms (e.g., C4.5) still do not directly handle costs in the learning process.

A second reason for using sampling is that many highly skewed data sets are enormous and the size of the training set must be reduced in order for learning to be feasible. In this case, undersampling seems to be a reasonable, and valid, strategy. In this paper we do not consider the need to reduce the training set size. We would point out, however, that if one needs to discard some training data, it still might be beneficial to discard some of the majority class examples in order to reduce the training set size to the required size, and then also employ a cost-sensitive learning algorithm, so that the amount of discarded training data is minimized.

A final reason that may have contributed to the use of sampling rather than a cost-sensitive learning algorithm is that misclassification costs are often unknown. However, this is not a valid reason for using sampling over a cost-sensitive learning algorithm, since the analogous issue arises with sampling—what should the class distribution of the final training data be? If this cost information is not known, a measure such as the area under the ROC curve could be used to measure classifier performance and both approaches could then empirically determine the proper cost ratio/class distribution.

They also did a series of experiments, which was inconclusive:

Based on the results from all of the data sets, there is no definitive winner between cost-sensitive learning, oversampling and undersampling

They then try to understand which criteria in the datasets may hint at which technique is better fitted. They also remark that SMOTE may bring some enhancements:

There are a variety of enhancements that people have made to improve the effectiveness of sampling. Some of these enhancements include introducing new “synthetic” examples when oversampling [5 -> SMOTE], deleting less useful majority-class examples when undersampling [11] and using multiple sub-samples when undersampling such that each example is used in at least one sub-sample [3]. While these techniques have been compared to oversampling and undersampling, they generally have not been compared to cost-sensitive learning algorithms. This would be worth studying in the future.

{1} Weiss, Gary M., Kate McCarthy, and Bibi Zabar. "Cost-sensitive learning vs. sampling: Which is best for handling unbalanced classes with unequal error costs?" DMIN 7 (2007): 35-41. https://scholar.google.com/scholar?cluster=10779872536070567255&hl=en&as_sdt=0,22 ; https://pdfs.semanticscholar.org/9908/404807bf6b63e05e5345f02bcb23cc739ebd.pdf
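The equivalence described in section 2.2 of the excerpt can be illustrated with a short sketch; the code below (my own, on simulated data) compares down-weighting the majority class with randomly discarding half of it, which in expectation imposes the same 2:1 cost ratio.

```python
# Sketch: discarding half of the majority class is, in expectation, the same
# as halving its weight, i.e. a 2:1 misclassification-cost ratio in favour of
# the minority class. Data and the 2:1 ratio are illustrative choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=20_000, weights=[0.95, 0.05], random_state=0)

# Option A: cost-sensitive learning -- keep all data, down-weight the majority.
cost_sensitive = LogisticRegression(class_weight={0: 0.5, 1: 1.0}, max_iter=1000)
cost_sensitive.fit(X, y)

# Option B: undersampling -- randomly drop half of the majority-class rows.
rng = np.random.default_rng(0)
maj = np.flatnonzero(y == 0)
keep = np.concatenate([rng.choice(maj, size=len(maj) // 2, replace=False),
                       np.flatnonzero(y == 1)])
undersampled = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])

print(cost_sensitive.coef_.round(2))
print(undersampled.coef_.round(2))   # similar, up to sampling noise
```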
Binary classification with strongly unbalanced classes
Several answers to this query have already provided several different approaches, all valid. This suggestion is from a paper and associated software by Gary King, eminent political scientist at Harvard. He has co-authored a paper titled Logistic Regression in Rare Events Data which provides some fairly cogent solutions. Here's the abstract:

We study rare events data, binary dependent variables with dozens to thousands of times fewer ones (events, such as wars, vetoes, cases of political activism, or epidemiological infections) than zeros ("nonevents"). In many literatures, these variables have proven difficult to explain and predict, a problem that seems to have at least two sources. First, popular statistical procedures, such as logistic regression, can sharply underestimate the probability of rare events. We recommend corrections that outperform existing methods and change the estimates of absolute and relative risks by as much as some estimated effects reported in the literature. Second, commonly used data collection strategies are grossly inefficient for rare events data. The fear of collecting data with too few events has led to data collections with huge numbers of observations but relatively few, and poorly measured, explanatory variables, such as in international conflict data with more than a quarter-million dyads, only a few of which are at war. As it turns out, more efficient sampling designs exist for making valid inferences, such as sampling all available events (e.g., wars) and a tiny fraction of nonevents (peace). This enables scholars to save as much as 99% of their (nonfixed) data collection costs or to collect much more meaningful explanatory variables. We provide methods that link these two results, enabling both types of corrections to work simultaneously, and software that implements the methods developed.

Here's a link to the paper ... http://gking.harvard.edu/files/abs/0s-abs.shtml
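As a rough sketch of the sampling-design idea in the abstract (my reading of it, not code from the paper's software), one can keep all events plus a small random fraction of nonevents, fit an ordinary logit, and then correct the intercept using the known population event rate; the correction below follows the prior-correction approach described by King and Zeng, and everything else is simulated for illustration.

```python
# Sketch: case-control style sampling for rare events, then a prior
# correction of the intercept using the population event fraction tau.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(size=(n, 1))
p = 1 / (1 + np.exp(-(-6.0 + 1.0 * x[:, 0])))     # rare events: P(y=1) ~ 0.4%
y = rng.binomial(1, p)

tau = y.mean()                                    # population event fraction
keep = (y == 1) | (rng.random(n) < 0.02)          # all events + 2% of nonevents
Xs, ys = x[keep], y[keep]

fit = LogisticRegression(C=1e6, max_iter=1000).fit(Xs, ys)  # large C ~ no penalty
ybar = ys.mean()                                  # event fraction in the sample
b0_corrected = fit.intercept_[0] - np.log(((1 - tau) / tau) * (ybar / (1 - ybar)))
print("slope:", fit.coef_[0][0], "corrected intercept:", b0_corrected)
```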
Binary classification with strongly unbalanced classes
I have to disagree with all of the answers. The original problem is not appropriate for classification at all but calls for an analysis of tendencies. See http://fharrell.com/post/classification. Miscasting the task as a classification task is what has caused so much work for everyone, and has caused invalid statistical methods that discard valuable data to be considered.
Binary classification with strongly unbalanced classes
Development of classifiers for datasets with imbalanced classes is a common problem in machine learning. Density-based methods can have significant merits over "traditional classifiers" in such situations. A density-based method estimates the unknown density $\hat{p}(x|y \in C)$, where $C$ is the most dominant class (in your example, $C = \{x: y_i = 0\}$). Once a density estimate is trained, you can predict the probability that an unseen test record $x^*$ belongs to this density estimate. If the probability is sufficiently small, less than a specified threshold (usually obtained through a validation phase), then $\hat{y}(x^*) \notin C$; otherwise $\hat{y}(x^*) \in C$. You can refer to the following paper: "A computable Plug-in estimator of Minimum Volume Sets for Novelty Detection," C. Park, J. Huang and Y. Ding, Operations Research, 58(5), 2013.
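A minimal sketch of this density-based recipe, assuming scikit-learn's kernel density estimator rather than the minimum-volume-set estimator of the cited paper: fit the density on the dominant class, choose the threshold on held-out majority data, and flag low-density points as not belonging to $C$. The bandwidth and the 1% threshold are illustrative choices.

```python
# Sketch: one-class, density-based classification of the dominant class C.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
X_majority = rng.normal(loc=0.0, scale=1.0, size=(5000, 2))   # class C only
X_train, X_val = X_majority[:4000], X_majority[4000:]

kde = KernelDensity(bandwidth=0.5).fit(X_train)               # p-hat(x | y in C)
threshold = np.quantile(kde.score_samples(X_val), 0.01)       # validation cut

def predict_in_C(x_star):
    """Return True if x_star is assigned to the dominant class C."""
    return kde.score_samples(np.atleast_2d(x_star))[0] >= threshold

print(predict_in_C([0.1, -0.2]))   # typical point -> True
print(predict_in_C([6.0, 6.0]))    # far from the bulk -> False
```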
Binary classification with strongly unbalanced classes
This is the sort of problem where Anomaly Detection is a useful approach. This is basically what rodrigo described in his answer, in which you determine the statistical profile of your training class, and set a probability threshold beyond which future measurements are determined not to belong to that class. Here is a video tutorial, which should get you started. Once you have absorbed that, I would recommend looking up Kernel Density Estimation.
List of situations where a Bayesian approach is simpler, more practical, or more convenient
(1) In contexts where the likelihood function is intractable (at least numerically), the use of the Bayesian approach, by means of Approximate Bayesian Computation (ABC), has gained ground over some frequentist competitors such as composite likelihoods (1, 2) or the empirical likelihood, because it tends to be easier to implement (not necessarily correct). Due to this, the use of ABC has become popular in areas where it is common to come across intractable likelihoods, such as biology, genetics, and ecology. Here, we could mention an ocean of examples. Some examples of intractable likelihoods are:

Superposed processes. Cox and Smith (1954) proposed a model in the context of neurophysiology which consists of $N$ superposed point processes. For example, consider the times between the electrical pulses observed at some part of the brain that were emitted by several neurones during a certain period. This sample contains non-iid observations, which makes it difficult to construct the corresponding likelihood, complicating the estimation of the corresponding parameters. A (partial) frequentist solution was recently proposed in this paper. The implementation of the ABC approach has also been recently studied and it can be found here.

Population genetics is another example of models leading to intractable likelihoods. In this case the intractability has a different nature: the likelihood is expressed in terms of a multidimensional integral (sometimes of dimension $1000+$) which would take a couple of decades just to evaluate at a single point. This area is probably ABC's headquarters.
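For readers unfamiliar with ABC, here is a toy rejection-sampling sketch of the basic idea (entirely my own illustration, with a tractable normal model chosen only so the code stays short): simulate from the model under parameter draws from the prior, and keep the draws whose simulated summary statistic lands close to the observed one.

```python
# Sketch: ABC rejection sampling -- no likelihood evaluation, only simulation.
# Model, prior, summary statistic and tolerance are all toy choices.
import numpy as np

rng = np.random.default_rng(0)
observed = rng.normal(loc=2.0, scale=1.0, size=100)   # pretend these are data
s_obs = observed.mean()                               # summary statistic

accepted = []
for _ in range(100_000):
    theta = rng.uniform(-10, 10)                      # draw from the prior
    simulated = rng.normal(loc=theta, scale=1.0, size=100)
    if abs(simulated.mean() - s_obs) < 0.05:          # tolerance epsilon
        accepted.append(theta)

accepted = np.array(accepted)
print("approximate posterior mean:", accepted.mean(), "n accepted:", len(accepted))
```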
List of situations where a Bayesian approach is simpler, more practical, or more convenient
As Bayesian software improves, the "easier to apply" issue becomes moot. Bayesian software is becoming packaged in easier and easier forms. A recent case in point is from an article titled, Bayesian estimation supersedes the t test. The following web site provides links to the article and software: http://www.indiana.edu/~kruschke/BEST/ An excerpt from the article's introduction: ... some people have the impression that conclusions from NHST and Bayesian methods tend to agree in simple situations such as comparison of two groups: “Thus, if your primary question of interest can be simply expressed in a form amenable to a t test, say, there really is no need to try and apply the full Bayesian machinery to so simple a problem” (Brooks, 2003, p. 2694). This article shows, to the contrary, that Bayesian parameter estimation provides much richer information than the NHST t test and that its conclusions can differ from those of the NHST t test. Decisions based on Bayesian parameter estimation are better founded than those based on NHST, whether the decisions derived by the two methods agree or not.
3,716
List of situations where a Bayesian approach is simpler, more practical, or more convenient
I am trained in frequentist statistics (econometrics, actually), but I have never had a confrontational stance towards the Bayesian approach, since my view is that the philosophical source of this "epic" battle was fundamentally misguided from the start (I have aired my views here). In fact I plan to train myself in the Bayesian approach in the immediate future. Why? Because one of the aspects of frequentist statistics that fascinates me the most as a mathematical and conceptual endeavor also troubles me the most: sample-size asymptotics. At least in econometrics, almost no serious paper today claims that any of the estimators usually applied in frequentist econometrics possesses any of the desirable "small-sample" properties we would want from an estimator. They all rely on asymptotic properties to justify their use, and most of the tests used have desirable properties only asymptotically. But we are not in "z-land / t-land" anymore: all the sophisticated (and formidable) apparatus of modern frequentist estimation and inference is also highly idiosyncratic, meaning that sometimes a very large sample is indeed needed for these precious asymptotic properties to emerge and favorably affect the estimates, as various simulations have shown. That can mean tens of thousands of observations, which are starting to become available in some fields of economic activity (like labor or financial markets) but will never be available in others (like macroeconomics), at least during my lifespan. And I am pretty bothered by that, because it renders the derived results truly uncertain (not just stochastic). Bayesian econometrics for small samples does not rely on asymptotic results. "But it relies on the subjective prior!" is the usual response, to which my simple, practical answer is the following: "If the phenomenon is old and has been studied before, the prior can be estimated from past data. If the phenomenon is new, by what else, if not by subjective arguments, can we start the discussion about it?"
3,717
List of situations where a Bayesian approach is simpler, more practical, or more convenient
This is a late reply; nevertheless, I hope it adds something. I have been trained in telecommunications, where most of the time we use the Bayesian approach. Here is a simple example: Suppose you can transmit four possible signals of +5, +2.5, -2.5, and -5 volts. One signal from this set is transmitted, but it is corrupted by Gaussian noise by the time it reaches the receiving end. In practice the signal is also attenuated, but we will drop this issue for simplicity. The question is: if you are at the receiving end, how do you design a detector that tells you which of these signals was originally transmitted? This problem obviously lies in the domain of hypothesis testing. However, you can't use p-values, since significance testing can potentially reject all four possible hypotheses, and you know that one of these signals was actually transmitted. We could use the Neyman-Pearson method to design a detector in principle, but it works best for binary hypotheses; for multiple hypotheses it becomes too clumsy, because you need to deal with a number of constraints on the false-alarm probabilities. A simple alternative is given by Bayesian hypothesis testing. Any of these signals could have been chosen for transmission, so the prior is equiprobable. In such equiprobable cases, the method boils down to choosing the signal with maximum likelihood. This method can be given a nice geometric interpretation: choose the signal that happens to be closest to the received signal. This also leads to the partition of the decision space into decision regions, such that if the received signal falls within a particular region, the hypothesis associated with that decision region is declared true. Thus the design of a detector is made easy.
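A minimal simulation of this minimum-distance (equivalently, maximum-likelihood) detector in R; the noise standard deviation and sample size are arbitrary choices for illustration.

```r
set.seed(10)
levels <- c(-5, -2.5, 2.5, 5)                 # possible transmitted voltages
n      <- 1e4
sent   <- sample(levels, n, replace = TRUE)   # equiprobable prior over the signals
recv   <- sent + rnorm(n, sd = 1)             # received = transmitted + Gaussian noise

# Decide on the signal level closest to each received value
detected <- sapply(recv, function(r) levels[which.min(abs(levels - r))])

mean(detected != sent)                        # symbol error rate
table(sent, detected)                         # the decision regions in action
```

Shrinking the noise standard deviation (or spreading the signal levels further apart) drives the error rate towards zero, which is exactly the geometric picture of the decision regions.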
3,718
List of situations where a Bayesian approach is simpler, more practical, or more convenient
(2) Stress-strength models. The use of stress-strength models is popular in reliability. The basic idea consists of estimating the parameter $\theta=P(X<Y)$, where $X$ and $Y$ are random variables. Interestingly, the calculation of the profile likelihood of this parameter is quite difficult in general (even numerically), except for some toy examples such as the exponential or normal case. For this reason, ad hoc frequentist solutions need to be considered, such as the empirical likelihood (see) or confidence intervals whose construction is also difficult in a general framework. On the other hand, the use of a Bayesian approach is very simple: if you have a sample from the posterior distribution of the parameters of the distributions of $X$ and $Y$, then you can easily transform it into a sample from the posterior of $\theta$. Let $X$ be a random variable with density and distribution given respectively by $f(x;\xi_1)$ and $F(x;\xi_1)$. Similarly, let $Y$ be a random variable with density and distribution given respectively by $g(y;\xi_2)$ and $G(y;\xi_2)$. Then $$\theta = \int F(y;\xi_1)g(y;\xi_2)dy. \tag{$\star$}$$ Note that this parameter is a function of the parameters $(\xi_1,\xi_2)$. In the exponential and normal cases, it can be expressed in closed form (see), but this is not the case in general (see this paper for an example). This complicates the calculation of the profile likelihood of $\theta$ and consequently classical interval inference on this parameter. The main problem can be summarised as follows: "The parameter of interest is an unknown/complicated function of the model parameters, and therefore we cannot find a reparameterisation that involves the parameter of interest." From a Bayesian perspective this is not an issue: if we have a sample from the posterior distribution of $(\xi_1,\xi_2)$, we can simply plug these samples into $(\star)$ to obtain a sample from the posterior of $\theta$ and provide interval inference for this parameter.
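A short R sketch of this plug-in idea for the exponential toy case, where the transformation is available in closed form: with $X\sim\mathrm{Exp}(\lambda_X)$ and $Y\sim\mathrm{Exp}(\lambda_Y)$ independent, $\theta=P(X<Y)=\lambda_X/(\lambda_X+\lambda_Y)$, and conjugate Gamma priors give Gamma posteriors for the rates. The Gamma(1, 1) priors and the simulated samples are arbitrary illustrative choices.

```r
set.seed(11)
x <- rexp(30, rate = 1.0)   # "stress" sample
y <- rexp(25, rate = 0.5)   # "strength" sample

a <- 1; b <- 1                                     # Gamma(a, b) prior on each rate
lx <- rgamma(10000, a + length(x), b + sum(x))     # posterior draws of lambda_x
ly <- rgamma(10000, a + length(y), b + sum(y))     # posterior draws of lambda_y

theta <- lx / (lx + ly)                            # posterior draws of P(X < Y)
c(mean = mean(theta), quantile(theta, c(.025, .975)))
```

For models without a closed form, the last step is simply replaced by a numerical evaluation of $(\star)$ at each posterior draw.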
3,719
List of situations where a Bayesian approach is simpler, more practical, or more convenient
So-called 'frequentist' statistical tests are typically equivalent to the in-principle more complex Bayesian approach under certain assumptions. When these assumptions are applicable, either approach will give the same result, so it is safe to use the easier-to-apply frequentist test. The Bayesian approach is safer in general because it makes the assumptions explicit, but if you know what you are doing, the frequentist test is often just as good as a Bayesian approach and typically easier to apply.
3,720
List of situations where a Bayesian approach is simpler, more practical, or more convenient
(I'll try what I thought would be the most typical kind of answer.) Let's say you have a situation where there are several variables and one response, and you know a good deal about how one of the variables ought to be related to the response, but not as much about the others. In a situation like this, if you were to run a standard multiple regression analysis, that prior knowledge would not be taken into account. A meta-analysis might be conducted afterwards, which might be interesting in shedding light on whether the current result was consistent with the other findings and might allow a slightly more precise estimate (by including the prior knowledge at that point). But that approach wouldn't allow what was known about that variable to influence the estimates of the other variables. Another option is that it would be possible to code, and optimize over, your own function that fixes the relationship with the variable in question and finds parameter values for the other variables that maximize the likelihood of the data given that restriction. The problem here is that whereas the first option does not constrain the beta estimate enough, this approach over-constrains it. It may be possible to jury-rig some algorithm that would address the situation more appropriately, but situations like this seem like ideal candidates for Bayesian analysis, where an informative prior on that one coefficient does exactly the in-between thing. Anyone not dogmatically opposed to the Bayesian approach ought to be willing to try it in cases like this.
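One minimal way to encode this, sketched below under simplifying assumptions (known error standard deviation, normal priors), is the conjugate normal posterior for the regression coefficients: a tight prior on the well-understood coefficient and vague priors on the rest. The simulated data, prior means, and prior scales are all invented for illustration.

```r
set.seed(1)
n  <- 50
x1 <- rnorm(n)
x2 <- 0.5 * x1 + rnorm(n)                    # correlated predictors
sigma <- 1                                   # treated as known to keep the sketch conjugate
y  <- 1 + 2 * x1 - 0.5 * x2 + rnorm(n, sd = sigma)
X  <- cbind(1, x1, x2)

# Prior: beta1 ~ N(2, 0.1^2) (strong prior knowledge), vague N(0, 100^2) on the others
m0 <- c(0, 2, 0)
P0 <- diag(1 / c(100^2, 0.1^2, 100^2))       # prior precision matrix

Pn <- crossprod(X) / sigma^2 + P0                      # posterior precision
Vn <- solve(Pn)                                        # posterior covariance
mn <- Vn %*% (crossprod(X, y) / sigma^2 + P0 %*% m0)   # posterior mean

cbind(ols = coef(lm(y ~ x1 + x2)), bayes = drop(mn))
```

Because the predictors are correlated, the informative prior on the second coefficient also shifts the posterior for the others, which is exactly the behaviour neither of the two ad hoc options above provides.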
3,721
List of situations where a Bayesian approach is simpler, more practical, or more convenient
Perhaps one of the most straightforward and common cases where the Bayesian approach is easier is quantifying the uncertainty of parameters. In this answer, I'm not referring to the interpretation of confidence intervals vs. credible intervals; for the moment, let's assume a user is fine with either method. With that said, in the Bayesian framework it's straightforward: it's the marginal variance of the posterior for any individual parameter of interest. Assuming you can sample from the posterior, just take your samples and compute your variances. Done! In the frequentist case, this is straightforward only in some cases, and it's a real pain when it's not. If we have a large number of samples relative to the number of parameters (and who really knows how large is large enough), we can use MLE theory to derive CIs. However, those criteria don't always hold, especially for interesting cases (e.g., mixed effects models). Sometimes we can use bootstrapping, but sometimes we can't! In the cases we can't, it can be really, really hard to derive error estimates, and it often requires a bit of cleverness (e.g., Greenwood's formula for deriving SEs for Kaplan-Meier curves). "Using some cleverness" is not always a reliable recipe!
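The "take your samples and compute" step really is this short in R; the toy posterior draws below are simulated stand-ins for whatever your sampler produces.

```r
# Toy posterior draws for two parameters (in practice these come from your MCMC output)
set.seed(1)
draws <- cbind(beta  = rnorm(4000, 1.8, 0.3),
               sigma = rgamma(4000, 50, 40))

apply(draws, 2, sd)                                   # posterior standard deviations
t(apply(draws, 2, quantile, probs = c(.025, .975)))   # 95% credible intervals
```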
3,722
List of situations where a Bayesian approach is simpler, more practical, or more convenient
An area of research in which the Bayesian methods are extremely straightforward and the frequentist methods are extremely hard to follow is that of Optimal Design. In a simple version of the problem, you would like to estimate a single regression coefficient of a logistic regression as efficiently as possible. You are allowed to take a single sample with $x^{(1)}$ equal to whatever you would like, update your estimate for $\beta$, then choose your next $x^{(2)}$, etc., until your estimate for $\beta$ meets some accuracy level. The tricky part is that the true value of $\beta$ dictates what the optimal choice of $x^{(i)}$ is. You might consider using the current estimate $\hat \beta$ of $\beta$, with the understanding that you are ignoring the error in $\hat \beta$. As such, you can get a perhaps only mildly sub-optimal choice of $x^{(i)}$ given a reasonable estimate of $\beta$. But what about when you first start? You have no frequentist estimate of $\beta$, because you have no data. So you'll need to gather some data (almost certainly in a very suboptimal manner), without much guiding theory to tell you what to pick. And even after a few picks, the Hauck-Donner effect can still prevent you from having a defined estimate of $\beta$. If you read up on the frequentist literature about how to deal with this, it's basically "randomly pick $x$'s until there exists a value of $x$ such that there are 0's and 1's above and below that point" (which means the Hauck-Donner effect will not occur). From the Bayesian perspective, this problem is very easy:
1. Start with your prior belief about $\beta$.
2. Find the $x$ that will have the maximum effect on the posterior distribution.
3. Sample using the value of $x$ chosen in (2) and update your posterior.
4. Repeat steps 2 & 3 until the desired accuracy is met.
The frequentist literature will bend over backwards to get you to try to find reasonable values of $x$ at which you can hopefully take samples and avoid the Hauck-Donner effect, so that you can start taking sub-optimal samples... whereas the Bayesian method is all very easy and takes into account the uncertainty in the parameter of interest.
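A grid-posterior sketch of this loop in R. Step 2 ("maximum effect on the posterior") is approximated here by maximizing the posterior-expected Fisher information $x^2 p(1-p)$, which is one of several reasonable criteria, not the only one; the prior, the grid, the candidate design points, and the true $\beta$ used to simulate responses are all invented for illustration.

```r
set.seed(2)
beta_true <- 1.5                                    # unknown in practice; used only to simulate y
beta_grid <- seq(-5, 5, by = 0.01)
log_post  <- dnorm(beta_grid, 0, 2, log = TRUE)     # step 1: prior belief about beta
x_cand    <- seq(-3, 3, by = 0.1)                   # design points we are allowed to use

for (i in 1:30) {
  post <- exp(log_post - max(log_post)); post <- post / sum(post)
  # Step 2: pick x maximising posterior-expected Fisher information x^2 * p * (1 - p)
  info <- sapply(x_cand, function(x) {
    p <- plogis(beta_grid * x)
    sum(post * x^2 * p * (1 - p))
  })
  x_star <- x_cand[which.max(info)]
  # Step 3: observe a response at x_star and update the posterior
  y <- rbinom(1, 1, plogis(beta_true * x_star))
  log_post <- log_post + dbinom(y, 1, plogis(beta_grid * x_star), log = TRUE)
}

post <- exp(log_post - max(log_post)); post <- post / sum(post)
c(post_mean = sum(post * beta_grid),
  post_sd   = sqrt(sum(post * beta_grid^2) - sum(post * beta_grid)^2))
```

Note that the very first design point comes straight from the prior, so there is no awkward "gather some data somehow" phase, and the Hauck-Donner issue never arises because the posterior is always proper.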
3,723
How to derive variance-covariance matrix of coefficients in linear regression
This is actually a cool question that challenges your basic understanding of a regression. First, let's clear up any initial confusion about notation. We are looking at the regression: $$y=b_0+b_1x+\hat{u}$$ where $b_0$ and $b_1$ are the estimators of the true $\beta_0$ and $\beta_1$, and $\hat{u}$ are the residuals of the regression. The underlying true and unobserved regression is thus denoted: $$y=\beta_0+\beta_1x+u$$ with expectation $E[u]=0$ and variance $E[u^2]=\sigma^2$. Some books denote $b$ as $\hat{\beta}$, and we adopt this convention here. We also make use of matrix notation, where $b$ is the $2\times 1$ vector that holds the estimators of $\beta=[\beta_0, \beta_1]'$, namely $b=[b_0, b_1]'$. (For the sake of clarity, I treat $X$ as fixed in the following calculations.) Now to your question. Your formula for the covariance is indeed correct, that is: $$\sigma(b_0, b_1) = E(b_0 b_1) - E(b_0)E(b_1) = E(b_0 b_1) - \beta_0 \beta_1 $$ I think you want to know why the true, unobserved coefficients $\beta_0, \beta_1$ appear in this formula. They actually cancel out if we take it a step further by expanding the formula. To see this, note that the variance of the estimator is given by: $$Var(\hat\beta)=\sigma^2(X'X)^{-1}$$ This matrix holds the variances in the diagonal elements and the covariances in the off-diagonal elements. To arrive at this formula, let's generalize your claim by using matrix notation. Let us therefore denote variance with $Var[\cdot]$ and expectation with $E[\cdot]$. $$Var[b]=E[bb']-E[b]E[b']$$ This is essentially the general variance formula, just in matrix notation. The equation resolves when we substitute in the standard expression for the estimator, $b=(X'X)^{-1}X'y$, and assume $E[b]=\beta$, i.e. that $b$ is an unbiased estimator. Writing the outer product $AA'$ compactly as $A^2$, we obtain: $$E[((X'X)^{-1}X'y)^2] - \underset{2 \times 2}{\beta^2}$$ Note that the right-hand term $\beta^2$ is a $2\times2$ matrix, namely $\beta\beta'$, but you may at this point already guess what will happen with this term shortly. Replacing $y$ with our expression for the true underlying data generating process above, we have: \begin{align*} E\Big[\Big((X'X)^{-1}X'y\Big)^2\Big] - \beta^2 &= E\Big[\Big((X'X)^{-1}X'(X\beta+u)\Big)^2\Big]-\beta^2 \\ &= E\Big[\Big(\underbrace{(X'X)^{-1}X'X}_{=I}\beta+(X'X)^{-1}X'u\Big)^2\Big]-\beta^2 \\ &= E\Big[\Big(\beta+(X'X)^{-1}X'u\Big)^2\Big]-\beta^2 \\ &= \beta^2+E\Big[\Big((X'X)^{-1}X'u\Big)^2\Big]-\beta^2 \end{align*} where the cross terms vanish because $E[u]=0$ and $X$ is fixed. Furthermore, the quadratic $\beta^2$ term cancels out as anticipated. Thus, writing out the outer product explicitly and using linearity of expectations, we have: $$Var[b]=(X'X)^{-1}X'\,E[uu']\,X(X'X)^{-1}$$ Note that by assumption $E[uu']=\sigma^2 I$, and that $(X'X)^{-1}X'X(X'X)'^{-1}=(X'X)^{-1}$ since $X'X$ is a $K\times K$ symmetric matrix and thus the same as its transpose. Finally we arrive at $$Var[b]=\sigma^2(X'X)^{-1}$$ and all the $\beta$ terms are gone. Intuitively, the variance of the estimator is independent of the value of the true underlying coefficient, as the latter is not a random variable per se. The result is valid for all individual elements of the variance-covariance matrix shown in the book, and thus also for the off-diagonal elements, with the $\beta_0\beta_1$ terms cancelling out in the same way. The only problem was that you had applied the general formula for the variance, which does not reflect this cancellation at first. Ultimately, the variance of the coefficients reduces to $\sigma^2(X'X)^{-1}$, independent of $\beta$. But what does this mean?
(I believe you also asked for a more general understanding of the covariance matrix.) Look at the formula in the book. It simply asserts that the variance of the estimator increases when the true underlying error term is noisier ($\sigma^2$ increases), but decreases when the spread of $X$ increases, because having observations spread widely around the true value lets you, in general, build an estimator that is more accurate and thus closer to the true $\beta$. On the other hand, the covariance terms on the off-diagonal become practically relevant in hypothesis testing of joint hypotheses such as $b_0=b_1=0$. Other than that, they are a bit of a fudge, really. Hope this clarifies all questions.
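A quick numerical check of the result in R: the hand-computed $\hat\sigma^2(X'X)^{-1}$ (with $\sigma^2$ replaced by the usual residual-variance estimate) should reproduce `vcov()` from `lm`. The simulated data are just an example.

```r
set.seed(3)
n <- 100
x <- rnorm(n)
y <- 2 + 3 * x + rnorm(n, sd = 1.5)

fit <- lm(y ~ x)
X   <- cbind(1, x)
sigma2_hat <- sum(resid(fit)^2) / (n - 2)   # estimate of sigma^2

sigma2_hat * solve(t(X) %*% X)   # variance-covariance matrix "by hand"
vcov(fit)                        # should match
```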
3,724
How to derive variance-covariance matrix of coefficients in linear regression
In your case we have $$X'X=\begin{bmatrix}n & \sum X_i\\\sum X_i & \sum X_i^2\end{bmatrix}$$ Invert this matrix and you will get the desired result.
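For completeness, carrying out that inversion with the standard $2\times 2$ inverse formula gives
$$(X'X)^{-1}=\frac{1}{n\sum X_i^2-\left(\sum X_i\right)^2}\begin{bmatrix}\sum X_i^2 & -\sum X_i\\ -\sum X_i & n\end{bmatrix},$$
and, using $n\sum X_i^2-\left(\sum X_i\right)^2=n\sum (X_i-\bar X)^2$ and multiplying by $\sigma^2$,
$$Var(b_1)=\frac{\sigma^2}{\sum (X_i-\bar X)^2},\qquad Var(b_0)=\frac{\sigma^2\sum X_i^2}{n\sum (X_i-\bar X)^2},\qquad Cov(b_0,b_1)=\frac{-\sigma^2\bar X}{\sum (X_i-\bar X)^2}.$$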
3,725
How to derive variance-covariance matrix of coefficients in linear regression
Maximum likelihood solution: $ \mathcal{L}(\beta_0,\beta_1|\sigma,\epsilon_1,\ldots,\epsilon_n) = \prod\limits_{i=1}^{n}\frac{1}{\sigma\sqrt{2\pi}} \exp\!\left[-\frac{\epsilon_i^2}{2\sigma^2}\right] \mbox{, where } \epsilon_i = \beta_0 + \beta_1 x_i - y_i$ $ \mathcal{LL}(\beta_0,\beta_1|\sigma,x_1,y_1,\ldots,x_n,y_n) = \sum\limits_{i=1}^{n}\left(\ln\!\left[\frac{1}{\sigma\sqrt{2\pi}}\right] - \frac{(\beta_0 + \beta_1 x_i - y_i)^2}{2\sigma^2}\right)$ Estimating the covariance matrix of the regression coefficients from the Fisher information (the negative inverse Hessian of the log-likelihood): $ \left[ \begin{array}{cc} s[\beta_0]^2 & s[\beta_0,\beta_1] \\ s[\beta_0,\beta_1] & s[\beta_1]^2 \\ \end{array} \right] = -\mathcal{H}^{-1} = -\left[ \begin{array}{cc} \frac{\partial^2{\mathcal{LL}}}{\partial{\beta_0^2}} & \frac{\partial^2{\mathcal{LL}}}{\partial{\beta_0}\partial{\beta_1}} \\ \frac{\partial^2{\mathcal{LL}}}{\partial{\beta_0}\partial{\beta_1}} & \frac{\partial^2{\mathcal{LL}}}{\partial{\beta_1^2}} \end{array} \right]^{-1} \\ = -\left(-\frac{1}{\sigma^2} \left[ \begin{array}{cc} n & \sum_{i=1}^{n}x_i \\ \sum_{i=1}^{n}x_i & \sum_{i=1}^{n}x_i^2 \end{array} \right]\right)^{-1} = \sigma^2\left[ \begin{array}{cc} n & \sum_{i=1}^{n}x_i \\ \sum_{i=1}^{n}x_i & \sum_{i=1}^{n}x_i^2 \end{array} \right]^{-1} = \left[ \begin{array}{cc} \frac{\sigma^2\sum_{i=1}^{n}x_i^2}{n\sum_{i=1}^{n}(x_i^2-\bar{x}^2)} & -\frac{\sigma^2\bar{x}}{\sum_{i=1}^{n}(x_i^2-\bar{x}^2)} \\ -\frac{\sigma^2\bar{x}}{\sum_{i=1}^{n}(x_i^2-\bar{x}^2)} & \frac{\sigma^2}{\sum_{i=1}^{n}(x_i^2-\bar{x}^2)} \end{array} \right]$
3,726
How to derive variance-covariance matrix of coefficients in linear regression
It appears that $\beta_0$ and $\beta_1$ are the expected values of the estimators $b_0$ and $b_1$ (not predicted values of $y$): the switch uses the unbiasedness properties $E(b_0)=\beta_0$ and $E(b_1)=\beta_1$.
3,727
How do I test that two continuous variables are independent?
(Answer partially updated in 2023.) This is a very hard problem in general, though your variables are apparently only 1d, so that helps. Of course, the first step (when possible) should be to plot the data and see if anything pops out at you; you're in 2d, so this should be easy. Here are a few approaches that work in $\mathbb{R}^d$ or even more general settings, to match the general title of the question. One general category is, related to the suggestion here, to estimate the mutual information:
Estimate mutual information via entropies, as mentioned. In low dimensions with sufficient samples, histogram / KDE / nearest-neighbour estimators should work okay, but expect them to behave very poorly as the dimension increases. In particular, the following simple estimator has finite-sample bounds (compared to most approaches' asymptotic-only properties): Sricharan, Raich, and Hero. Empirical estimation of entropy functionals with confidence. arXiv:1012.4188 [math.ST]
Similar direct estimators of mutual information, e.g. the following based on nearest neighbours: Pál, Póczos, and Szepesvári. Estimation of Rényi Entropy and Mutual Information Based on Generalized Nearest-Neighbor Graphs, NeurIPS 2010.
Variational estimators of mutual information, based on optimizing some function typically parameterized as a neural network; this is probably the "default" modern approach in high dimensions. The following paper gives a nice overview of the relationship between various estimators. Be aware, however, that these approaches are highly dependent on the neural network class and optimization scheme, and can have particularly surprising behaviour in their bias/variance tradeoffs. Poole, Ozair, van den Oord, Alemi, and Tucker. On Variational Bounds of Mutual Information, ICML 2019.
There are also other approaches, based on measures other than the mutual information.
The Schweizer-Wolff approach is a classic one based on copula transformations, and so is invariant to monotone increasing transformations. I'm not very familiar with this one, but I think it's computationally simpler but also maybe less powerful than most of the other approaches here. (I vaguely expect it can be framed as a special case of some of the other approaches but haven't really thought about it.) Schweizer and Wolff, On Nonparametric Measures of Dependence for Random Variables, Annals of Statistics 1981.
The Hilbert-Schmidt independence criterion (HSIC): a kernel (in the sense of RKHS, not KDE) based approach that measures the norm of $\operatorname{Cov}(\phi(X), \psi(Y))$ for kernel features $\phi$ and $\psi$. In fact, the HSIC with kernels defined by a deep network is related to one of the more common variational estimators, InfoNCE; see discussion here. Gretton, Bousquet, Smola, and Schölkopf, Measuring Statistical Independence with Hilbert-Schmidt Norms, Algorithmic Learning Theory 2005.
Statisticians are probably more familiar with the distance covariance/correlation, as mentioned here previously; this is in fact a special case of the HSIC with a particular choice of kernel, but that choice is maybe often a better kernel choice than the default Gaussian kernel typically used for HSIC. Székely, Rizzo, and Bakirov, Measuring and testing dependence by correlation of distances, Annals of Statistics 2007.
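Since the question asks for something usable on two 1d samples, here is a hand-rolled distance correlation with a permutation p-value in R (a sketch following the Székely et al. definition; for routine use the energy package provides tested implementations such as dcov.test).

```r
# Sample distance correlation via double-centred distance matrices
dcor_stat <- function(x, y) {
  A <- as.matrix(dist(x)); B <- as.matrix(dist(y))
  dc <- function(M) sweep(sweep(M, 1, rowMeans(M)), 2, colMeans(M)) + mean(M)
  A <- dc(A); B <- dc(B)
  sqrt(mean(A * B)) / (mean(A * A) * mean(B * B))^(1 / 4)
}

set.seed(4)
x <- rnorm(200)
y <- x^2 + rnorm(200, sd = 0.5)          # dependent, but nearly uncorrelated

obs  <- dcor_stat(x, y)
perm <- replicate(499, dcor_stat(x, sample(y)))
mean(c(obs, perm) >= obs)                # permutation p-value
```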
3,728
How do I test that two continuous variables are independent?
Hoeffding developed a general nonparametric test for the independence of two continuous variables using joint ranks to test $H_{0}: H(x,y) = F(x)G(y)$. This 1948 test is implemented in the R Hmisc package's hoeffd function.
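A minimal usage sketch, assuming the hoeffd interface documented in Hmisc (two numeric vectors, returning matrices of D statistics and p-values); the simulated non-monotone relationship is just an example.

```r
library(Hmisc)

set.seed(5)
x <- runif(200)
y <- sin(4 * pi * x) + rnorm(200, sd = 0.2)   # dependent but non-monotone

h <- hoeffd(x, y)
h$D[1, 2]   # Hoeffding's D for the (x, y) pair
h$P[1, 2]   # corresponding p-value
```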
3,729
How do I test that two continuous variables are independent?
How about this paper: http://arxiv.org/pdf/0803.4101.pdf "Measuring and testing dependence by correlation of distances". Székely and Bakirov always have interesting stuff. There is MATLAB code for the implementation: http://www.mathworks.com/matlabcentral/fileexchange/39905-distance-correlation If you find any other (simple to implement) test for independence, let us know.
3,730
How do I test that two continuous variables are independent?
The link between distance covariance and kernel tests (based on the Hilbert-Schmidt independence criterion) is given in the paper: Sejdinovic, D., Sriperumbudur, B., Gretton, A., and Fukumizu, K., Equivalence of distance-based and RKHS-based statistics in hypothesis testing, Annals of Statistics, 41 (5), pp. 2263-2291, 2013. It shows that distance covariance is a special case of the kernel statistic, for a particular family of kernels. If you're intent on using mutual information, a test based on a binned estimate of the MI is: Gretton, A. and Gyorfi, L., Consistent Nonparametric Tests of Independence, Journal of Machine Learning Research, 11, pp. 1391-1423, 2010. If you're interested in getting the best test power, you're better off using the kernel tests rather than binning and mutual information. That said, given your variables are univariate, classical nonparametric independence tests like Hoeffding's are probably fine.
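For concreteness, a bare-bones HSIC permutation test in R with Gaussian kernels (a sketch using the biased estimator and a median-heuristic bandwidth; dedicated kernel-methods packages provide better-tuned implementations).

```r
hsic <- function(x, y) {
  n <- length(x)
  gram <- function(z) {
    D2 <- as.matrix(dist(z))^2
    s  <- median(D2[D2 > 0])           # median heuristic for the bandwidth
    exp(-D2 / s)
  }
  H <- diag(n) - matrix(1 / n, n, n)   # centering matrix
  K <- gram(x); L <- gram(y)
  sum(diag(K %*% H %*% L %*% H)) / n^2
}

set.seed(6)
x <- rnorm(100)
y <- x^2 + rnorm(100, sd = 0.5)

obs  <- hsic(x, y)
perm <- replicate(199, hsic(x, sample(y)))
mean(c(obs, perm) >= obs)              # permutation p-value
```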
3,731
How do I test that two continuous variables are independent?
Rarely (never?) in statistics can you demonstrate that your sample statistic equals a point value. You can test against point values and either exclude them or not exclude them. But the nature of statistics is that it is about examining variable data. Because there is always variance, there will necessarily be no way to know that something is exactly unrelated, normal, Gaussian, etc. You can only know a range of values for it, and you can know whether a value is excluded from the range of plausible values. For example, it's easy to exclude no relationship and to give a range of values for how big the relationship is. Therefore, trying to demonstrate no relationship, essentially the point value relationship = 0, is not going to meet with success. If you have a range of measures of relationship that are acceptable as approximately 0, then it would be possible to devise a test. Assuming you can accept that limitation, it would be helpful to the people trying to assist you if you provided a scatterplot with a lowess curve. Since you're looking for R solutions, try: scatter.smooth(x, y) Based on the limited information you've given so far, I think a generalized additive model might be the best thing for testing non-independence. If you plot that with CIs around the predicted values, you may be able to make statements about a belief of independence. Check out gam in the mgcv package. The help is quite good, and there is assistance here regarding the CI.
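A short sketch of that suggestion, with simulated x and y standing in for the asker's data:

```r
library(mgcv)

set.seed(7)
x <- runif(300)
y <- cos(3 * pi * x) + rnorm(300, sd = 0.4)

scatter.smooth(x, y)                 # the lowess plot suggested above
fit <- gam(y ~ s(x))
summary(fit)                         # approximate test of the smooth term s(x)
plot(fit, se = TRUE, shade = TRUE)   # fitted smooth with a confidence band
```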
3,732
How do I test that two continuous variables are independent?
It may be interesting ... Garcia, J. E.; Gonzalez-Lopez, V. A. (2014) Independence tests for continuous random variables based on the longest increasing subsequence. Journal of Multivariate Analysis, v. 127, pp. 126-146. http://www.sciencedirect.com/science/article/pii/S0047259X14000335
3,733
How do I test that two continuous variables are independent?
Yet another approach is Constrained Covariance: basically, for a "sufficiently rich" function class $G$, Constrained Covariance of two random variables $X$ and $Y$ is $$ \text{CoCo}(X,Y)=\sup_{g_1,g_2\in G} \text{corr}(g_1(X),g_2(Y)) $$ (possibly related to another answer) As for the computational aspect, please see Safe, Anytime-Valid Inference (SAVI) and Game-theoretic Statistics (I am omitting the details because they are being published now)
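As a crude illustration of the definition (a sketch only: here $G$ is a small hand-picked dictionary of functions rather than the "sufficiently rich" class, such as an RKHS ball, that the definition has in mind):

set.seed(1)
x <- rnorm(200)
y <- x^2 + rnorm(200, sd = 0.3)       # dependent on x but nearly uncorrelated with it
G <- list(function(t) t, function(t) t^2, function(t) sin(t), function(t) abs(t))
coco_hat <- max(sapply(G, function(g1) sapply(G, function(g2) cor(g1(x), g2(y)))))
cor(x, y)    # near zero: plain correlation misses the dependence
coco_hat     # clearly positive: the pair g1(t) = t^2, g2(t) = t exposes it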
3,734
A more definitive discussion of variable selection
Andrew Gelman is definitely a respected name in the statistical world. His principles closely align with some of the causal modeling research that has been done by other "big names" in the field. But I think that, given your interest in clinical research, you should be consulting other sources. I am using the word "causal" loosely (as do others) because there is a fine line we must draw between performing "causal inference" from observational data, and asserting causal relations between variables. We all agree RCTs are the main way of assessing causality. We rarely adjust for anything in such trials per the randomization assumption, with few exceptions (Senn, 2004). Observational studies have their importance and utility (Weiss, 1989), and the counterfactual-based approach to making inference from observational data is accepted as a philosophically sound approach to doing so (Höfler, 2005). It often approximates very closely the use-efficacy measured in RCTs (Anglemyer, 2014). Therefore, I'll focus on studies from observational data.

My point of contention with Gelman's recommendations is: all predictors in a model, and their posited causal relationships with a single exposure of interest and a single outcome of interest, should be specified a priori. Throwing in and excluding covariates based on their relationship with a set of main findings actually induces a special case of 'Munchausen's statistical grid' (Martin, 1984). Some journals (and the trend is catching on) will summarily reject any article which uses stepwise regression to identify a final model (Babyak, 2004), and I think the problem is seen in similar ways here.

The rationale for inclusion and exclusion of covariates in a model is discussed in Judea Pearl's Causality (Pearl, 2002). It is perhaps one of the best texts around for understanding the principles of statistical inference, regression, and multivariate adjustment. Also practically anything by Sanders and Greenland is illuminating, in particular their discussion on confounding, which is regrettably omitted from this list of recommendations (Greenland et al. 1999). Specific covariates can be assigned labels based on a graphical relation with a causal model. Designations such as prognostic, confounder, or precision variables warrant inclusion as covariates in statistical models. Mediators, colliders, or variables beyond the causal pathway should be omitted. The definitions of these terms are made rigorous with plenty of examples in Causality.

Given this little background, I'll address the points one by one. This is generally a sound approach with one MAJOR caveat: these variables must NOT be mediators of the outcome. If, for instance, you are inspecting the relationship between smoking and physical fitness, and you adjust for lung function, that attenuates the effect of smoking because its direct impact on fitness is that of reducing lung function. This should NOT be confused with confounding, where the third variable is causal of the predictor of interest AND the outcome of interest. Confounders must be included in models. Additionally, overadjustment can cause multiple forms of bias in analyses. Mediators and confounders are deemed as such NOT because of what is found in analyses, but because of what is BELIEVED by YOU as the subject-matter expert (SME). If you have 20 observations per variable or fewer, or 20 observations per event in time-to-event or logistic analyses, you should consider conditional methods instead.
This is an excellent power-saving approach that is not as complicated as propensity score adjustment or SEM or factor analysis. I would definitely recommend doing this whenever possible.

I disagree wholeheartedly. The point of adjusting for other variables in analyses is to create strata for which comparisons are possible. Misspecifying confounder relations does not generally lead to overly biased analyses, so residual confounding from omitted interaction terms is, in my experience, not a big issue. You might, however, consider interaction terms between the predictor of interest and other variables as a post-hoc analysis. This is a hypothesis-generating procedure that is meant to refine any possible findings (or lack thereof) as (a) potentially belonging to a subgroup or (b) involving a mechanistic interaction between two environmental and/or genetic factors.

I also disagree with this wholeheartedly. It does not coincide with the confirmatory-analysis-based approach to regression. You are the SME. The analyses should be informed by the QUESTION and not the DATA. State with confidence what you believe to be happening, based on a pictorial depiction of the causal model (using a DAG and related principles from Pearl et al.), then choose the predictors for your model of interest, fit, and discuss. Only as a secondary analysis should you consider this approach, if at all.

The role of machine learning in all of this is highly debatable. In general, machine learning is focused on prediction and not inference, which are distinct approaches to data analysis. You are right that effects from penalized regression are not easily interpreted for a non-statistical community, unlike estimates from an OLS, where 95% CIs and coefficient estimates provide a measure of association. The interpretation of the coefficient from an OLS model Y~X is straightforward: it is a slope, an expected difference in Y comparing groups differing by 1 unit in X. In a multivariate adjusted model Y~X1+X2 we modify this as a conditional slope: it is an expected difference in Y comparing groups differing by 1 unit in X1 who have the same value of X2. Geometrically, adjusting for X2 leads to distinct strata or "cross sections" of the three-dimensional space where we compare X1 to Y; then we average up the findings over each of those strata. In R, the coplot function is very useful for visualizing such relations.
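A minimal coplot sketch (simulated data, chosen only to make the conditional-slope idea visible):

set.seed(1)
x2 <- rnorm(200)
x1 <- 0.5 * x2 + rnorm(200)
y  <- 1 + 2 * x1 + 3 * x2 + rnorm(200)
coplot(y ~ x1 | x2, number = 4, rows = 1,
       panel = function(x, y, ...) { points(x, y); abline(lm(y ~ x)) })
coef(lm(y ~ x1))        # marginal slope, inflated because x2 is omitted
coef(lm(y ~ x1 + x2))   # conditional slope, close to the true value of 2
# each coplot panel is a "cross section" at roughly constant x2, and the within-panel
# slopes are close to the conditional slope from the adjusted model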
3,735
A more definitive discussion of variable selection
This magnificent question and @AdamO's comprehensive answer are a prime example of how CV regularly renews my faith in humanity. I'll aim here mainly to offer some ways to appreciate that answer (and the OP's question) in a broader context. Firstly, I venture to assert that all reliable advice regarding statistical practice is cautionary in nature -- proscriptive rather than prescriptive. Gelman & Hill point #3, for example, while it reads superficially as advice to actively do something ("consider"), is really better understood as cautioning against failing to consider interactions with powerful effects. Understood intuitively as an appeal to the intuition connected with picking the most important terms in a (multivariate) Taylor series expansion, it seems unobjectionable to me. Secondly, while the OP is busy getting a better education than most PhD biostatisticians have (by following up AdamO's citations), OP might as well also pick up David A. Freedman's Statistical Models and Causal Inference [1], where a healthy challenge will be found to the presumption that regression should be our primary tool in clinical research. I recommend especially Chapter 3, "Statistical Models and Shoe Leather," which is also available in previously published form [2] here. (Don't let the name of the journal turn you off; the key lessons drawn are from John Snow's investigations on cholera. See also this answer, where these lessons are laid out in some detail.) Finally -- and perhaps this is really a corollary to Freedman -- it should be mentioned that the example 'conclusions' offered by the OP would actually belong in the Results section of the paper. It would be most healthy to consider as early as possible how the real Conclusions and Discussion sections of the paper would be worded, so as to be accessible to doctors, the media, and even to the increasing number of patients and their lay advocates who heroically labor to read the medical literature. Maintaining focus on that end point will usefully shape the technical work of the statistical analysis, and keep it grounded in the reality of the world it's aiming to describe, and the needs it's aiming to serve.

Freedman, David, David Collier, Jasjeet Singh Sekhon, and Philip B. Stark. Statistical Models and Causal Inference: A Dialogue with the Social Sciences. Cambridge; New York: Cambridge University Press, 2010.

Freedman, David A. “Statistical Models and Shoe Leather.” Sociological Methodology 21 (1991): 291–313. doi:10.2307/270939.
3,736
Examples of Bayesian and frequentist approach giving different answers
This example is taken from here. (I even think I got this link from SO, but cannot find it anymore.) A coin has been tossed $n=14$ times, coming up heads $k=10$ times. If it is to be tossed twice more, would you bet on two heads? Assume you do not get to see the result of the first toss before the second toss (and also independently conditional on $\theta$), so that you cannot update your opinion on $\theta$ in between the two throws. By independence, $$f(y_{f,1}=\text{heads},y_{f,2}=\text{heads}|\theta)=f(y_{f,1}=\text{heads}|\theta)f(y_{f,2}=\text{heads}|\theta)=\theta^2.$$ Then the predictive distribution, given a $\text{Beta}(\alpha_0,\beta_0)$ prior, becomes \begin{eqnarray*} f(y_{f,1}=\text{heads},y_{f,2}=\text{heads}|y)&=&\int f(y_{f,1}=\text{heads},y_{f,2}=\text{heads}|\theta)\pi(\theta|y)d\theta\notag\\ &=&\frac{\Gamma\left(\alpha_{0}+\beta_{0}+n\right)}{\Gamma\left(\alpha_{0}+k\right)\Gamma\left(\beta_{0}+n-k\right)}\int \theta^2\theta^{\alpha_{0}+k-1}\left( 1-\theta \right)^{\beta_{0}+n-k-1}d\theta\notag\\ &=&\frac{\Gamma\left(\alpha_{0}+\beta_{0}+n\right)}{\Gamma\left(\alpha_{0}+k\right)\Gamma\left(\beta_{0}+n-k\right)}\frac{\Gamma\left(\alpha_{0}+k+2\right)\Gamma\left(\beta_{0}+n-k\right)}{\Gamma\left(\alpha_{0}+\beta_{0}+n+2\right)}\notag\\ &=&\frac{(\alpha_{0}+k)\cdot(\alpha_{0}+k+1)}{(\alpha_{0}+\beta_{0}+n)\cdot(\alpha_{0}+\beta_{0}+n+1)} \end{eqnarray*} For a uniform prior (a $\text{Beta}(1, 1)$ prior), this gives roughly $0.485$. Hence, you would likely not bet. Based on the MLE $10/14$, you would calculate a probability of two heads of $(10/14)^2\approx 0.51$, such that betting would make sense.
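As a quick numerical check of the closed-form expression (a minimal R sketch using the same $n=14$, $k=10$ and a uniform $\text{Beta}(1,1)$ prior):

n <- 14; k <- 10; a0 <- 1; b0 <- 1
# Bayesian posterior predictive probability of two further heads
(a0 + k) * (a0 + k + 1) / ((a0 + b0 + n) * (a0 + b0 + n + 1))   # about 0.485, so do not bet
# plug-in answer based on the MLE
(k / n)^2                                                       # about 0.510, so bet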
3,737
Examples of Bayesian and frequentist approach giving different answers
See my question here, which mentions a paper by Edwin Jaynes that gives an example of a correctly constructed frequentist confidence interval, where there is sufficient information in the sample to know for certain that the true value of the statistic lies nowhere in the confidence interval (and thus the confidence interval is different from the Bayesian credible interval). However, the reason for this is the difference in the definition of a confidence interval and a credible interval, which in turn is a direct consequence of the difference in frequentist and Bayesian definitions of probability. If you ask a Bayesian to produce a Bayesian confidence (rather than credible) interval, then I suspect that there will always be a prior for which the intervals will be the same, so the differences are down to choice of prior. Whether frequentist or Bayesian methods are appropriate depends on the question you want to pose, and at the end of the day it is the difference in philosophies that decides the answer (provided that the computational and analytic effort required is not a consideration). Being somewhat tongue in cheek, it could be argued that a long run frequency is a perfectly reasonable way of determining the relative plausibility of a proposition, in which case frequentist statistics is a slightly odd subset of subjective Bayesianism - so any question a frequentist can answer a subjectivist Bayesian can also answer in the same way, or in some other way should they choose different priors. ;o)
3,738
Examples of Bayesian and frequentist approach giving different answers
I believe this paper provides a more purposeful sense of the trade-offs in actual applications between the two. Part of this might be due to my preference for intervals rather than tests. Gustafson, P. and Greenland, S. (2009). Interval Estimation for Messy Observational Data. Statistical Science 24: 328–342. With regard to intervals, it may be worthwhile to keep in mind that frequentist confidence intervals require/demand uniform coverage (exactly or at least greater than x% for each and every parameter value that does not have zero probability), and if they don't have that, they aren't really confidence intervals. (Some would go further and say that they must also rule out relevant subsets that change the coverage.) Bayesian coverage is usually defined by relaxing that to "on average coverage", given that the assumed prior turns out to be exactly correct. Gustafson and Greenland (2009) call these omnipotent priors and consider fallible ones to provide a better assessment.
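A small simulation sketch of the uniform-versus-average coverage distinction for a binomial proportion with $n=20$ (the choice of a Beta(1,1) prior and of the grid of $\theta$ values is mine, purely for illustration):

set.seed(1)
n <- 20
coverage <- function(theta, nsim = 20000) {
  x <- rbinom(nsim, n, theta)
  # exact (Clopper-Pearson) 95% confidence limits
  cp_lo <- numeric(nsim); cp_hi <- rep(1, nsim)
  cp_lo[x > 0] <- qbeta(0.025, x[x > 0], n - x[x > 0] + 1)
  cp_hi[x < n] <- qbeta(0.975, x[x < n] + 1, n - x[x < n])
  # equal-tailed 95% credible limits under a Beta(1, 1) prior
  cr_lo <- qbeta(0.025, 1 + x, 1 + n - x)
  cr_hi <- qbeta(0.975, 1 + x, 1 + n - x)
  c(confidence = mean(cp_lo <= theta & theta <= cp_hi),
    credible   = mean(cr_lo <= theta & theta <= cr_hi))
}
sapply(c(0.05, 0.1, 0.3, 0.5), coverage)
# the confidence interval stays at or above 0.95 for every theta, while the credible
# interval's coverage oscillates around 0.95 and falls below it near theta = 0.05 (about 0.92)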
3,739
Examples of Bayesian and frequentist approach giving different answers
If someone were to pose a question that has both a frequentist and Bayesian answer, I suspect that someone else would be able to identify an ambiguity in the question, thus making it not "well formed". In other words, if you need a frequentist answer, use frequentist methods. If you need a Bayesian answer, use Bayesian methods. If you don't know which you need, then you may not have defined the question unambiguously. However, in the real world there are often several different ways to define a problem or ask a question. Sometimes it is not clear which of those ways is preferable. This is especially common when one's client is statistically naive. Other times one question is much more difficult to answer than another. In those cases one often goes with the easiest while trying to make sure his clients agree with precisely what question he is asking or what problem he is solving.
3,740
Examples of Bayesian and frequentist approach giving different answers
I recommend looking at Exercise 3.15 of the freely-available textbook Information Theory, Inference and Learning Algorithms by MacKay. When spun on edge 250 times, a Belgian one-euro coin came up heads 140 times and tails 110. 'It looks very suspicious to me', said Barry Blight, a statistics lecturer at the London School of Economics. 'If the coin were unbiased the chance of getting a result as extreme as that would be less than 7%'. But do these data give evidence that the coin is biased rather than fair? The example is worked out in detail on pp. 63-64 of the textbook. The conclusion is that the $p$-value is $0.07$, but the Bayesian approach gives varying levels of support for either hypothesis, depending on the prior. This ranges from a recommended answer of no evidence that the coin is biased (when a flat prior is used) to an answer of no more than $6:1$ against the null hypothesis of unbiasedness, in the case that an artificially extreme prior is used.
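To reproduce the two numbers in R (a minimal sketch; the Bayes factor below uses the flat Beta(1,1) prior on the bias, i.e. the "no evidence" case described above):

# frequentist: exact two-sided binomial test of theta = 0.5
binom.test(140, 250, p = 0.5)$p.value      # about 0.07, the "less than 7%" in the quote
# Bayes factor for "biased, uniform prior on theta" versus "fair":
# P(data | biased) / P(data | fair) = B(141, 111) / 0.5^250
exp(lbeta(141, 111) + 250 * log(2))        # about 0.5, i.e. no evidence that the coin is biased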
3,741
Examples of Bayesian and frequentist approach giving different answers
An important area where the two approaches will yield conflicting assessments is the context of multiplicity. Since p-values involve the probability of getting more extreme results than the results observed if a null hypothesis is true, having more looks at the data will increase the p-value. For Bayes, on the other hand, more looks at the data just result in more rapid updating of evidence, and previous evidence assessments are now obsolete and can be completely ignored. Bayesian measures are study time-respecting while frequentist $\alpha$ probability is non-directional.

Two classes of examples are (1) sequential testing, where frequentist approaches are well developed but are conservative, and (2) situations in which there is no way to use a frequentist approach to even address the problem of interest. In a sequential study, using Bayes one may look at the data infinitely often without changing the definition or reliability of posterior probabilities. The frequentist approach becomes increasingly conservative as the number of looks increases. In a study in which there are multiple endpoints, the frequentist approach has a great deal of difficulty even putting together an overall evidentiary measure, while the Bayesian approach has no difficulty. For example, suppose that one is developing a migraine headache drug and the outcomes are sleep problems, pain, nausea, light sensitivity, and sound sensitivity. One may reasonably claim the drug to be a success if there is a high posterior probability that the drug improved any 3 of the 5 patient outcomes. The only frequentist methods that have been proposed are closed testing procedures that seek evidence for any or all of the 5 endpoints being benefited by the drug.

Another major category where Bayes disagrees with the frequentist approach is the frequent case where the frequentist result is incorrect from a frequentist standpoint. This occurs quite generally when the log likelihood has a very non-Gaussian shape, for example with binary logistic regression with an imbalanced Y. While the uncommonly used profile likelihood interval yields fairly accurate confidence interval coverage probabilities, the most commonly used approaches such as the Wald method and various bootstrap intervals do not. You will see inaccurate tail non-coverage probabilities in at least one of the two tails. Bayesian highest posterior density or credible intervals, on the other hand, are exact for all sample sizes.
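A small simulation sketch of the sequential-testing point (my own toy setup: five interim looks at accumulating data generated under the null, each tested naively at the 5% level):

set.seed(1)
looks <- c(20, 40, 60, 80, 100)
one_trial <- function() {
  x <- rnorm(max(looks))                               # the null is true: mean 0
  p <- sapply(looks, function(m) t.test(x[1:m])$p.value)
  any(p < 0.05)                                        # declare "significance" at any look?
}
mean(replicate(5000, one_trial()))   # roughly 0.13, far above the nominal 0.05

This inflation is why frequentist group-sequential designs must spend alpha and become more conservative with each additional look, whereas a posterior probability can simply be recomputed at every look without changing its definition.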
3,742
Examples of Bayesian and frequentist approach giving different answers
The answer provided by Christoph Hanck compares a Bayesian prediction interval for a future experimental result with a frequentist point estimate of a population-level parameter. A more appropriate comparison would be to compare a Bayesian prediction interval with a frequentist prediction interval. In my examples below I compare Bayesian posterior intervals with frequentist confidence intervals. Ultimately, the choice of using a Bayesian or frequentist approach comes down to how you choose to define probability. See my post here on interpretation and why one would choose a frequentist interpretation, Bayesian vs frequentist interpretations of probability. The following is taken from my manuscript on confidence distributions - Johnson, Geoffrey S. "Decision Making in Drug Development via Confidence Distributions" Researchgate.net (2021). In short, objective Bayesian and frequentist inference will differ the most when the data distribution is skewed, the sample size is small, and inference is performed near the boundary of the parameter space. Below I will begin with an example where Bayesian and frequentist inference agree perfectly. I will then provide another example where they differ. Under $H_0$: $\theta=\theta_0$ the likelihood ratio test statistic -2log$\lambda(\boldsymbol{X},\theta_0)$ follows an asymptotic $\chi^2_1$ distribution (Wilks 1938). If an upper-tailed test is inverted for all values of $\theta$ in the parameter space, the resulting distribution function of one-sided p-values is called a confidence distribution function. That is, the one-sided p-value testing $H_0$: $\theta\le\theta_0$, \begin{eqnarray}\label{eq} H(\theta_0,\boldsymbol{x})= \left\{ \begin{array}{cc} \big[1-F_{\chi^2_1}\big(-2\text{log}\lambda(\boldsymbol{x},\theta_0)\big)\big]/2 & \text{if } \theta_0 \le \hat{\theta}_{mle} \\ & \\ \big[1+F_{\chi^2_1}\big(-2\text{log}\lambda(\boldsymbol{x},\theta_0)\big)\big]/2 & \text{if } \theta_0 > \hat{\theta}_{mle}, \end{array} \right. \end{eqnarray} as a function of $\theta_0$ and the observed data $\boldsymbol{x}$ is the corresponding confidence distribution function, where $\hat{\theta}_{mle}$ is the maximum likelihood estimate of $\theta$ and $F_{\chi^2_1}(\cdot)$ is the cumulative distribution function of a $\chi^2_1$ random variable. Typically the naught subscript is dropped and $\boldsymbol{x}$ is suppressed to emphasize that $H(\theta)$ is a function over the entire parameter space. This recipe of viewing the p-value as a function of $\theta$ given the data produces a confidence distribution function for any hypothesis test. The confidence distribution can also be depicted by its density defined as $h(\theta)=dH(\theta)/d\theta$. Consider the setting where $X_1,...,X_n\sim\text{Exp}(\theta)$ with likelihood function $L(\theta)=\theta^{-n} e^{-\sum{x_i}/\theta}$. Then $supL(\theta)$ yields $\hat{\theta}_{mle}=\bar{x}$ as the maximum likelihood estimate for $\theta$, the likelihood ratio test statistic is $-2\text{log}\lambda({\boldsymbol{x},\theta_0})\equiv-2\text{log}\big(L({\theta}_0)/L(\hat{\theta}_{mle}) \big)$, and the corresponding confidence distribution function is defined as above. The histogram above, supported by $\bar{x}$, depicts the plug-in estimated sampling distribution for the maximum likelihood estimator (MLE) of the mean for exponentially distributed data with $n=5$ and $\hat{\theta}_{mle}=1.5$. 
Replacing the unknown fixed true $\theta$ with $\hat{\theta}_{mle}=1.5$, this displays the estimated sampling behavior of the MLE for all other replicated experiments, a $\text{Gamma}(5,0.3)$ distribution. The Bayesian posterior depicted by the thin blue curve resulting from a vague conjugate prior or an improper 1/θ prior is a transformation of the likelihood and is supported on the parameter space, an $\text{Inverse Gamma}(5,7.5)$ distribution. The bold black curve is also data dependent and supported on the parameter space, but represents confidence intervals of all levels from inverting the likelihood ratio test. It is a transformation of the sampling behavior of the test statistic under the null onto the parameter space, a ``distribution" of p-values. Each value of $\theta$ takes its turn playing the role of null hypothesis and hypothesis testing (akin to proof by contradiction) is used to infer the unknown fixed true $\theta$. The area under this curve to the right of the reference line is the p-value or significance level when testing the hypothesis $H_0$: $\theta \ge 2.35$. This probability forms the level of confidence that $\theta$ is greater than or equal to 2.35. Similarly, the area to the left of the reference line is the p-value when testing the hypothesis $H_0$: $\theta \le 2.35$. One can also identify the two-sided equal-tailed $100(1-\alpha)\%$ confidence interval by finding the complement of those values of $\theta$ in each tail with $\alpha$/2 significance. The dotted curve shows the exact likelihood ratio confidence density formed by noting that $\bar{X}\sim$ Gamma$(n,\theta/n)$ and inverting its cumulative distribution function. This confidence density agrees perfectly with the posterior distribution. A confidence density similar to that based on the likelihood ratio test can be produced by inverting a Wald test with a log link. When a normalized likelihood approaches a normal distribution with increasing sample size, Bayesian and frequentist inference are asymptotically equivalent. In the example above the posterior mean agrees with the maximum likelihood estimate. This is not always the case. Take, for example, estimation and inference on a non-linear monotonic transformation of $\theta$. For an example where Bayesian and frequentist inference differ, consider the setting where $X_1,...,X_n\sim\text{Bernoulli}(\theta)$ with likelihood function $L(\theta)=\theta^{\sum x_i}(1-\theta)^{n-\sum x_i}$. The conjugate Bayesian posterior is a $\text{Beta}(a+\sum x_i, b+n-\sum x_i)$ where $a$ and $b$ are the prior parameters. If a vague conjugate prior is used and $19$ events are witnessed in a sample of size $n=20$, the Bayesian posterior becomes $\text{Beta}(a+19, b+20-19)$. This produces a posterior mean point estimate of $\frac{a+19}{a+19+b+20-19}$. Below are two posterior density estimates with 95% credible intervals based on vague conjugate priors, one with $a=1$ and $b=1$, and another with $a=0.1$ and $b=0.1$. Also plotted are confidence curves, one-sided p-values calculated using the cumulative distribution function for $\sum X_i\sim$ $\text{Bin}(n=20,\theta)$, as well as the resulting 95% confidence interval. The Bayesian and frequentist point and interval estimates are similar but different. I suspect this is due at least in part to the parameter space being continuous and the sample space for $\sum X_i$ being discrete. 
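A minimal R sketch of the two comparisons above, under the same settings ($n=5$, $\bar{x}=1.5$ for the exponential case; 19 events out of $n=20$ for the Bernoulli case); the exact binomial confidence limits come from binom.test, which inverts the binomial CDF in the same spirit as the plotted confidence curves:

# exponential mean: exact confidence distribution vs. posterior under the 1/theta prior
n <- 5; xbar <- 1.5; s <- n * xbar
theta <- seq(0.5, 6, by = 0.01)
H_conf <- 1 - pgamma(s, shape = n, scale = theta)      # invert the pivot sum(x) ~ Gamma(n, scale = theta)
H_post <- 1 - pgamma(1 / theta, shape = n, rate = s)   # CDF of the Inverse-Gamma(5, 7.5) posterior
max(abs(H_conf - H_post))                              # numerically zero: perfect agreement

# Bernoulli case: 19 events in 20 trials
binom.test(19, 20)$conf.int                            # exact 95% confidence interval
qbeta(c(0.025, 0.975), 1 + 19, 1 + 1)                  # 95% equal-tailed credible interval, Beta(1,1) prior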
In terms of a willingness to bet, the Bayesian bets according to his beliefs while the frequentist bets according to the long-run probability of his testing procedure. This betting is best imagined in terms of a betting market. The question becomes, what would the market decide on, personal belief or long-run performance? Most gamblers would agree that long-run performance is the best bet. If there is no relevant historical data the frequentist would be willing to bet $\$0.95$ and expect $\$1$ in return if his $95\%$ confidence interval contains the true $\theta$ based on the long-run characteristics of the test above, whereas the Bayesian would be willing to bet more than $\$0.95$ for the same interval based on his beliefs that all $\theta$'s were equally likely (or concentrated near 0 and 1) until $19$ events were witnessed in a sample of $n=20$. The frequentist would gladly "buy this bet in the market" at $\$0.95$ and "sell it" (play the bookie) to the Bayesian to make a risk-free profit regardless of whether the frequentist or Bayesian interval covers the true $\theta$. To the frequentist, at no point was $\theta$ randomly selected from a $\text{Beta}(a, b)$ distribution and then imagined instead to have been selected from a $\text{Beta}(a+\sum x_i, b+n-\sum x_i)$. The Bayesian prior represents subjective belief. It can also be used to incorporate historical data. Under the frequentist paradigm, historical data can be incorporated via a fixed-effect meta-analysis. (1)
Examples of Bayesian and frequentist approach giving different answers
The answer provided by Christoph Hanck compares a Bayesian prediction interval for a future experimental result with a frequentist point estimate of a population-level parameter. A more appropriate c
Examples of Bayesian and frequentist approach giving different answers The answer provided by Christoph Hanck compares a Bayesian prediction interval for a future experimental result with a frequentist point estimate of a population-level parameter. A more appropriate comparison would be to compare a Bayesian prediction interval with a frequentist prediction interval. In my examples below I compare Bayesian posterior intervals with frequentist confidence intervals. Ultimately, the choice of using a Bayesian or frequentist approach comes down to how you choose to define probability. See my post here on interpretation and why one would choose a frequentist interpretation, Bayesian vs frequentist interpretations of probability. The following is taken from my manuscript on confidence distributions - Johnson, Geoffrey S. "Decision Making in Drug Development via Confidence Distributions" Researchgate.net (2021). In short, objective Bayesian and frequentist inference will differ the most when the data distribution is skewed, the sample size is small, and inference is performed near the boundary of the parameter space. Below I will begin with an example where Bayesian and frequentist inference agree perfectly. I will then provide another example where they differ. Under $H_0$: $\theta=\theta_0$ the likelihood ratio test statistic -2log$\lambda(\boldsymbol{X},\theta_0)$ follows an asymptotic $\chi^2_1$ distribution (Wilks 1938). If an upper-tailed test is inverted for all values of $\theta$ in the parameter space, the resulting distribution function of one-sided p-values is called a confidence distribution function. That is, the one-sided p-value testing $H_0$: $\theta\le\theta_0$, \begin{eqnarray}\label{eq} H(\theta_0,\boldsymbol{x})= \left\{ \begin{array}{cc} \big[1-F_{\chi^2_1}\big(-2\text{log}\lambda(\boldsymbol{x},\theta_0)\big)\big]/2 & \text{if } \theta_0 \le \hat{\theta}_{mle} \\ & \\ \big[1+F_{\chi^2_1}\big(-2\text{log}\lambda(\boldsymbol{x},\theta_0)\big)\big]/2 & \text{if } \theta_0 > \hat{\theta}_{mle}, \end{array} \right. \end{eqnarray} as a function of $\theta_0$ and the observed data $\boldsymbol{x}$ is the corresponding confidence distribution function, where $\hat{\theta}_{mle}$ is the maximum likelihood estimate of $\theta$ and $F_{\chi^2_1}(\cdot)$ is the cumulative distribution function of a $\chi^2_1$ random variable. Typically the naught subscript is dropped and $\boldsymbol{x}$ is suppressed to emphasize that $H(\theta)$ is a function over the entire parameter space. This recipe of viewing the p-value as a function of $\theta$ given the data produces a confidence distribution function for any hypothesis test. The confidence distribution can also be depicted by its density defined as $h(\theta)=dH(\theta)/d\theta$. Consider the setting where $X_1,...,X_n\sim\text{Exp}(\theta)$ with likelihood function $L(\theta)=\theta^{-n} e^{-\sum{x_i}/\theta}$. Then $supL(\theta)$ yields $\hat{\theta}_{mle}=\bar{x}$ as the maximum likelihood estimate for $\theta$, the likelihood ratio test statistic is $-2\text{log}\lambda({\boldsymbol{x},\theta_0})\equiv-2\text{log}\big(L({\theta}_0)/L(\hat{\theta}_{mle}) \big)$, and the corresponding confidence distribution function is defined as above. The histogram above, supported by $\bar{x}$, depicts the plug-in estimated sampling distribution for the maximum likelihood estimator (MLE) of the mean for exponentially distributed data with $n=5$ and $\hat{\theta}_{mle}=1.5$. 
Replacing the unknown fixed true $\theta$ with $\hat{\theta}_{mle}=1.5$, this displays the estimated sampling behavior of the MLE for all other replicated experiments, a $\text{Gamma}(5,0.3)$ distribution. The Bayesian posterior depicted by the thin blue curve resulting from a vague conjugate prior or an improper 1/θ prior is a transformation of the likelihood and is supported on the parameter space, an $\text{Inverse Gamma}(5,7.5)$ distribution. The bold black curve is also data dependent and supported on the parameter space, but represents confidence intervals of all levels from inverting the likelihood ratio test. It is a transformation of the sampling behavior of the test statistic under the null onto the parameter space, a ``distribution" of p-values. Each value of $\theta$ takes its turn playing the role of null hypothesis and hypothesis testing (akin to proof by contradiction) is used to infer the unknown fixed true $\theta$. The area under this curve to the right of the reference line is the p-value or significance level when testing the hypothesis $H_0$: $\theta \ge 2.35$. This probability forms the level of confidence that $\theta$ is greater than or equal to 2.35. Similarly, the area to the left of the reference line is the p-value when testing the hypothesis $H_0$: $\theta \le 2.35$. One can also identify the two-sided equal-tailed $100(1-\alpha)\%$ confidence interval by finding the complement of those values of $\theta$ in each tail with $\alpha$/2 significance. The dotted curve shows the exact likelihood ratio confidence density formed by noting that $\bar{X}\sim$ Gamma$(n,\theta/n)$ and inverting its cumulative distribution function. This confidence density agrees perfectly with the posterior distribution. A confidence density similar to that based on the likelihood ratio test can be produced by inverting a Wald test with a log link. When a normalized likelihood approaches a normal distribution with increasing sample size, Bayesian and frequentist inference are asymptotically equivalent. In the example above the posterior mean agrees with the maximum likelihood estimate. This is not always the case. Take, for example, estimation and inference on a non-linear monotonic transformation of $\theta$. For an example where Bayesian and frequentist inference differ, consider the setting where $X_1,...,X_n\sim\text{Bernoulli}(\theta)$ with likelihood function $L(\theta)=\theta^{\sum x_i}(1-\theta)^{n-\sum x_i}$. The conjugate Bayesian posterior is a $\text{Beta}(a+\sum x_i, b+n-\sum x_i)$ where $a$ and $b$ are the prior parameters. If a vague conjugate prior is used and $19$ events are witnessed in a sample of size $n=20$, the Bayesian posterior becomes $\text{Beta}(a+19, b+20-19)$. This produces a posterior mean point estimate of $\frac{a+19}{a+19+b+20-19}$. Below are two posterior density estimates with 95% credible intervals based on vague conjugate priors, one with $a=1$ and $b=1$, and another with $a=0.1$ and $b=0.1$. Also plotted are confidence curves, one-sided p-values calculated using the cumulative distribution function for $\sum X_i\sim$ $\text{Bin}(n=20,\theta)$, as well as the resulting 95% confidence interval. The Bayesian and frequentist point and interval estimates are similar but different. I suspect this is due at least in part to the parameter space being continuous and the sample space for $\sum X_i$ being discrete. 
In terms of a willingness to bet, the Bayesian bets according to his beliefs while the frequentist bets according to the long-run probability of his testing procedure. This betting is best imagined in terms of a betting market. The question becomes, what would the market decide on, personal belief or long-run performance? Most gamblers would agree that long-run performance is the best bet.

If there is no relevant historical data the frequentist would be willing to bet $\$0.95$ and expect $\$1$ in return if his $95\%$ confidence interval contains the true $\theta$ based on the long-run characteristics of the test above, whereas the Bayesian would be willing to bet more than $\$0.95$ for the same interval based on his beliefs that all $\theta$'s were equally likely (or concentrated near 0 and 1) until $19$ events were witnessed in a sample of $n=20$. The frequentist would gladly "buy this bet in the market" at $\$0.95$ and "sell it" (play the bookie) to the Bayesian to make a risk-free profit regardless of whether the frequentist or Bayesian interval covers the true $\theta$. To the frequentist, at no point was $\theta$ randomly selected from a $\text{Beta}(a, b)$ distribution and then imagined instead to have been selected from a $\text{Beta}(a+\sum x_i, b+n-\sum x_i)$.

The Bayesian prior represents subjective belief. It can also be used to incorporate historical data. Under the frequentist paradigm, historical data can be incorporated via a fixed-effect meta-analysis.
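For reference, the intervals being bet on in this second example can be computed directly. The following is a rough Python sketch of my own (not the code behind the figures above); the frequentist interval comes from inverting the one-sided binomial tests described earlier.

import numpy as np
from scipy import stats

n, k = 20, 19                         # 19 events in n = 20 trials, as in the example

# Bayesian: a vague conjugate Beta(a, b) prior gives a Beta(a + k, b + n - k) posterior
for a, b in [(1, 1), (0.1, 0.1)]:
    post = stats.beta(a + k, b + n - k)
    print(f"Beta({a},{b}) prior:  posterior mean {post.mean():.3f},  "
          f"95% credible interval {np.round(post.ppf([0.025, 0.975]), 3)}")

# Frequentist: invert one-sided tests based on the Bin(n, theta) CDF to get the
# 95% equal-tailed confidence interval (the confidence-curve construction above)
theta = np.linspace(1e-4, 1 - 1e-7, 200_000)
p_ge = stats.binom.sf(k - 1, n, theta)    # p-value testing H0: theta <= theta0
p_le = stats.binom.cdf(k, n, theta)       # p-value testing H0: theta >= theta0
keep = (p_ge > 0.025) & (p_le > 0.025)
print("95% confidence interval:", round(theta[keep].min(), 3), round(theta[keep].max(), 4))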
Examples of Bayesian and frequentist approach giving different answers
A funny but insightful example is given by xkcd in https://xkcd.com/1132/: It stands for a whole group of problems where we have a strong prior and Frequentism neglects the prior. The Frequentist compares how likely the result is in the light of the null hypothesis, but she does not consider whether the hypothesis is a priori far more unlikely. So the two come to opposite conclusions.
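To see the mechanics, here is a tiny worked version of the comic's setup (the prior value below is an arbitrary illustrative number of mine, not something given in the comic): the detector lies only with probability 1/36, yet the posterior probability that the sun exploded stays negligible because the prior is so small.

# prior is an assumed illustrative value; the comic only fixes the 1/36 lie probability
prior = 1e-6                          # P(sun exploded) before asking the detector
p_yes_if_exploded = 35 / 36           # detector answers "yes" truthfully
p_yes_if_fine = 1 / 36                # detector lies (rolled double six)

posterior = (p_yes_if_exploded * prior) / (
    p_yes_if_exploded * prior + p_yes_if_fine * (1 - prior))
print(posterior)                      # ~3.5e-5 -- still essentially impossible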
Examples of Bayesian and frequentist approach giving different answers
The answer provided by Christoph Hanck compares a Bayesian predictive probability to a frequentist point estimate. Below are frequentist p-values for making a prediction (Johnson 2021a) as compared to a Bayesian posterior predictive distribution and a discussion on willingness to bet.

Recall the problem statement, "A coin has been tossed $n=14$ times, coming up heads $k=10$ times. If it is to be tossed twice more, would you bet on two heads? Assume you do not get to see the result of the first toss before the second toss (and also independently conditional on $\theta$), so that you cannot update your opinion on $\theta$ in between the two throws."

Let $\theta$ be the probability of heads and $r$ be the number of heads in the next two flips. To predict the result of two coin tosses the frequentist will calculate the p-value testing the hypothesis that the next two throws will be heads, $H_0: r=2$ or equivalently $H_0: \hat{\theta}_2=1$, using an ancillary quantity, where $\hat{\theta}_2=r/2$ and $\hat{\theta}_{14}=k/14$. A candidate for such a quantity could involve $$\hat{\theta}_{14}-\hat{\theta}_2.$$ Based on 100,000 simulations the sampling distribution for this quantity is approximately ancillary, meaning it approximately does not depend on the unknown fixed true $\theta$. Near $\theta=10/14=0.71$ we could try approximating it with a bell curve. Since our observed estimate and hypothesized value yield $\hat{\theta}_{14}-\hat{\theta}_2 = 10/14 - 2/2 = -0.29$, the lower-tailed p-value using an approximate Wald test would be $$\Phi\Bigg(\frac{\hat{\theta}_{14}-\hat{\theta}_{2}}{\hat{\text{se}}}\Bigg)=0.20,$$ where $\hat{\text{se}}=\sqrt{\hat{\theta}_{14}(1-\hat{\theta}_{14})/14 + \hat{\theta}_{14}(1-\hat{\theta}_{14})/2}$.

If we increase the number of bins in our histogram and look specifically at the sampling distribution of our statistic at $\theta=10/14$ we see that it may not look quite so bell shaped. Referencing this sampling distribution, the lower-tailed probability of $\hat{\theta}_{14}-\hat{\theta}_{2}=-0.29$ or something more extreme is $0.31$, not far off from our crude Wald p-value.

Below is a table of predictive p-values as a function of the hypothesis for $r$ being tested based on the above simulated sampling distribution, as well as posterior predictive belief "probabilities" based on a $\text{Beta}(1,1)$ prior, $k=10$, and $n=14$. There are in fact two conclusions the frequentist could draw based on the observed data and the approximate sampling distribution. Had we witnessed, say, $k=9$ or $k=11$ this ambiguity would not have occurred. Interestingly, the Bayesian finds himself in a similar predicament.

$$H_0:\hat{\theta}_2=0\iff H_0: r=0$$ $$H_0:\hat{\theta}_2 = 0.5\iff H_0: r = 1$$ $$H_0:\hat{\theta}_2=1\iff H_0: r=2$$

                                           H0: r=0         H0: r=1                           H0: r=2
Predictive p-value [smallest one-sided]    0.05 [upper]    0.34 [upper]                      0.31 [lower]
Confidence curve 1 [one-sided p-value]     0.05 [upper]    0.34|0.76 [upper|lower] (0.64)    0.31 [lower]
Confidence curve 2 [one-sided p-value]     0.05 [upper]    0.34 [upper]                      0.81 [upper] (0.66)
Posterior predictive distribution          0.11            0.40                              0.49

Conclusion 1

To the frequentist, $H_0: r=1$ or equivalently $H_0: \hat{\theta}_2=0.5$ could be regarded as the most plausible hypothesis because it has the largest tail probability of $0.34$ and the other two hypotheses have smaller p-values. This is consistent with the point estimate $\hat{\theta}_{14}=0.71$ since this estimate is closer to $H_0: \hat{\theta}_2=0.5$ than to $H_0: \hat{\theta}_2=1$. Not only can $\hat{\theta}_{14}=0.71$ be viewed as an estimate for the unknown fixed true $\theta$, it can also be viewed as an estimate for the as yet unobserved $\hat{\theta}_2$. To the Bayesian, $r=2$ has the largest predictive posterior belief and is therefore most "probable" (plausible). This is contrary to the conclusion drawn by Christoph Hanck that the Bayesian would not bet on $r=2$ while the frequentist would.

To the frequentist, since we can "rule out" $r=0$ at the $0.05$ level and $r=2$ at the $0.31$ level, we are $64\%$ confident in the alternative, which is $r=1$. This confidence level is nothing more than a restatement of the p-values for $r=0$ and $r=2$, $100(1-0.05-0.31)\%$. In the long run $64\%$ of the time we see a discrepancy like $\hat{\theta}_{14}-\hat{\theta}_2=10/14-1/2=0.21$ [i.e. within (-0.29, 0.71)] regardless of the unknown fixed true $\theta$ (approximately). These statements of confidence are presented in the table above as "Confidence curve 1." Based on the observed data the frequentist would be most surprised by a result of $r=0$, followed by $r=2$, and least surprised by $r=1$.

In terms of a willingness to bet, the frequentist would bet $\$0.31$ and expect $\$1$ in return if $r=2$ based on the long-run characteristics of the test above, whereas the Bayesian would be willing to bet $\$0.49$ based on his beliefs that all $\theta$'s were equally likely until the coin came up heads $k=10$ times in $n=14$ throws. The frequentist would gladly "buy this bet in the market" at $\$0.31$ and "sell it" (play the bookie) to the Bayesian at $\$0.49$, making a risk-free $\$0.49-\$0.31=\$0.18$ regardless of whether the coin actually lands twice on heads (Dutch book). To the frequentist, at no point was $\theta$ randomly selected from a uniform distribution and then imagined instead to have been selected from a $\text{Beta}(1+10,1+14-10)$.

Conclusion 2

Since the upper-tailed p-value testing $H_0:r= 1$ is smaller than the sum of the p-values testing $H_0:r=0$ and $H_0:r=2$, the frequentist could conclude that $H_0:r=2$ is the most plausible hypothesis. That is, since we can rule out $H_0:r=1$ (and by extension $H_0:r=0$) at the one-sided $0.34$ level, we are therefore $66\%$ confident in the alternative, which is $r=2$. This is shown in Confidence curve 2 (note these p-values are not expected to sum to 1). In the long run $66\%$ of the time we see a discrepancy like $\hat{\theta}_{14}-\hat{\theta}_2=10/14-2/2=-0.29$ [i.e. greater than 0.21] regardless of the unknown fixed true $\theta$ (approximately). While this does not coincide with the point estimate $\hat{\theta}_{14}=0.71$ being closest to $H_0:\hat{\theta}_2=0.5$, it does coincide with the probability mass of the sampling distribution being concentrated most heavily around $\hat{\theta}_{14}-\hat{\theta}_2$ $=10/14-2/2$ $=-0.29$. Based on this conclusion the frequentist would be most surprised by a result of $r \le 1$ and least surprised by $r=2$. To the Bayesian, $r=0$ and $r=1$ have a combined posterior predictive belief of $0.51$, and therefore $r\ne 2$ is most "probable" (plausible).

In terms of a willingness to bet, the frequentist would bet $\$0.66$ and expect $\$1$ in return if $r=2$ based on the long-run characteristics of the test above, whereas the Bayesian would be willing to bet $\$0.49$ based on his beliefs that all $\theta$'s were equally likely until the coin came up heads $k=10$ times in $n=14$ throws. The frequentist would gladly "buy the other side in the market" at $\$1-\$0.66=\$0.34$ and "sell it" (play the bookie) to the Bayesian at $\$1-\$0.49=\$0.51$, making a risk-free $\$0.51-\$0.34=\$0.17$ regardless of whether the coin actually lands twice on heads (Dutch book).

The second conclusion could be deemed the most appropriate since it results in a higher confidence level for the favored outcome. This interpretation utilized evidential p-values under a Fisherian framework. Under a Neyman-Pearson framework one would pre-specify a single strawman null hypothesis or default prediction, say, $H_0:r=2$, and pre-specify an arbitrary type I error rate, say, $\alpha=0.35$. Based on the results above, this hypothesis would be rejected in favor of the alternative, $H_a:r\ne 2$. If, however, one had pre-specified a lower type I error rate of, say, $\alpha=0.20$, the strawman default prediction of $H_0:r=2$ would have been retained. This approach is useful if there are no consequences when predicting $H_0:r=2$ and getting it wrong, and dire consequences when predicting $H_a:r\ne 2$ and getting it wrong.

Below are confidence curves for $\theta$ from inverting an exact likelihood ratio test based on our sample of size $n=14$, which produces a 95% confidence interval for $\theta$ of (0.42, 0.92). Since our quantity $\hat{\theta}_{14}-\hat{\theta}_2$ for predicting $r$ is not exactly ancillary, we could perform a number of sensitivity analyses to see how the p-value changes depending on the unknown fixed true $\theta$. The above analysis and suggested sensitivity analysis are less than ideal since the quantity we chose is not exactly ancillary (the sampling distribution is not exactly the same for every value of $\theta$). We could look for another quantity that is exactly ancillary or a closer approximation to ancillary.

Many Bayesians would presume, as Christoph Hanck did, that the frequentist is forced to assume that $\theta$ is known to be exactly equal to $10/14$ and reference the above binomial sampling distribution for $r$ when making predictions. This might seem at odds with our prediction above. After all, our best estimate for $\theta$ is $\hat{\theta}_{14}=0.71$ and therefore our best estimate for the probability of observing two heads, $\theta^2$, is $\hat{\theta}_{14}^2=0.51$. However, by chance alone we could have easily observed $9$ out of $14$ heads, leading to an estimate of $\hat{\theta}_{14}^2=0.41$. The confidence curves below from inverting an exact likelihood ratio test account for the sampling variability of our estimator, with a $64\%$ confidence interval matching the level of confidence in our prediction for $r$. Based on the long-run probability of our experiment we are $5\%$ confident that $\theta^2$ is less than or equal to $0.21$, $31\%$ confident that $\theta^2$ is greater than or equal to $0.64$, and therefore $64\%$ confident that $\theta^2$ lies within $(0.21, 0.64)$. If $\theta^2$ is truly $0.21$ then the probability that $r=1$ is $0.50$, which is in line with conclusion 1. However, if $\theta^2$ is near $0.64$ then $r=2$ has the highest probability, in line with conclusion 2.

Performing inference on $\theta^2$ has the benefit of exact inference on a continuous hypothesis space compared to forming a prediction for $r$. It also formulates the solution in terms of a direct probability statement about $r$ while accounting for the uncertainty around having estimated $\theta$. This may make it easier for non-statisticians to interpret and understand (Johnson 2021 b).
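As a rough numerical check on those confidence statements, here is a short Python sketch of my own. It is not the exact likelihood ratio inversion used above; it uses one-sided binomial tail probabilities instead, so it only approximately reproduces the 5% and 31% figures.

import numpy as np
from scipy import stats

k, n = 10, 14   # observed heads and number of tosses

def p_theta_small(theta0):
    # one-sided p-value testing H0: theta <= theta0 (large counts are evidence against)
    return stats.binom.sf(k - 1, n, theta0)   # P(X >= k | theta0)

def p_theta_large(theta0):
    # one-sided p-value testing H0: theta >= theta0 (small counts are evidence against)
    return stats.binom.cdf(k, n, theta0)      # P(X <= k | theta0)

# confidence that theta^2 <= 0.21 and that theta^2 >= 0.64 (take square roots for theta)
print(round(p_theta_small(np.sqrt(0.21)), 3))   # ~0.05
print(round(p_theta_large(np.sqrt(0.64)), 3))   # ~0.30, close to the 0.31 above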
Nevertheless, this solution, though exact, still leaves ambiguity as to which bet to take. The real solution to the problem might simply be that we do not have enough information to reliably predict $r$, whether constructing predictive p-values or performing inference on $\theta^2$. For more information we would need to increase our sample size $n$ (Johnson 2021 c). Although the Bayesian and frequentist results differ numerically, a similar conclusion is reached under both paradigms since a Bayesian belief of $0.50$ is interpreted as "undecided."

While the Bayesian can reliably model his beliefs for any sample size, beliefs are not facts. If our prediction is not based on facts, it will not be reliable. No one is ever interested in how a prophet believes in his prediction. They only ever care about how often he gets it right.

SAS code for the simulations, p-values, and figures above:

%let k=10; %let n=14;

* simulate the sampling distribution of theta14_hat - theta2_hat near theta = k/n;
data sim;
*do theta=0.1, 0.25, 0.5, 0.75, 0.9;
do theta=round(&k./&n.,0.01);
do sim=1 to 100000;
y14=rand('binomial',theta,&n.);
y2=rand('binomial',theta,2);
*T=log( ((y2/2)/(1-y2/2)) / ((y14/&n.)/(1-y14/&n.)) );
*T=log( (y2/2)/(y14/&n.) );
*T=(y2/2)/(y14/&n.);
T=y14/&n.-y2/2;
output;
end;
end;
run;

* one-sided Wald p-values for each hypothesized r;
%let Wald_r0=; %let Wald_r1=; %let Wald_r2=;
data pvalue;
do r=0 to 2 by 1;
theta14=&k./&n.;
theta2=r/2;
t=theta14-theta2;
se=sqrt( theta14*(1-theta14)/&n. + theta14*(1-theta14)/2 );
Wald_pvalue=min(cdf('normal',t/se ,0 ,1), 1-cdf('normal',t/se ,0 ,1));
if r=2 then call symput('Wald_r2',Wald_pvalue);
if r=1 then call symput('Wald_r1',Wald_pvalue);
if r=0 then call symput('Wald_r0', Wald_pvalue);
output;
end;
run;

data pvalue;
set pvalue;
if r=2 then Wald_confidence=1-&Wald_r0.-&Wald_r1.;
if r=1 then Wald_confidence=1-&Wald_r0.-&Wald_r2.;
if r=0 then Wald_confidence=1-&Wald_r2.-&Wald_r1.;
max=max(&Wald_r0.,&Wald_r1.,&Wald_r2.);
if round(Wald_pvalue,0.001) ne round(max,0.001) then Wald_confidence=Wald_pvalue;
keep r Wald_pvalue Wald_confidence;
run;

proc print data=pvalue noobs;
var r Wald_pvalue Wald_confidence;
footnote 'One-sided Wald p-values';
run;
footnote;

* histogram of the simulated sampling distribution with reference lines at the observed discrepancies;
ods escapechar="^";
ods graphics / height=3in width=6in border=no;
proc sgpanel data=sim;
panelby theta / onepanel rows=1;
histogram T ;*/ nbins=6;
*density T / type=normal;
refline %sysevalf(&k./&n.-1) / axis=x lineattrs=(color=darkblue thickness=2);
refline %sysevalf(&k./&n.-0) / axis=x lineattrs=(color=darkblue thickness=2);
refline %sysevalf(&k./&n.-0.5) / axis=x lineattrs=(color=darkblue thickness=2);
colaxis label="^{unicode hat}^{unicode theta}14 - ^{unicode hat}^{unicode theta}2";
run;

proc sort data=sim out=sort;
by T;
run;

* empirical CDF of the simulated statistic, used to read off the predictive p-values;
ods trace on;
ods select none;
proc univariate data=sim pctldef=3;
var t;
cdf t;
ods output cdfplot=cdf;
run;
ods select all;

data cdf;
set cdf;
ecdfx=round(ecdfx,0.0001);
run;

data cdf;
set cdf;
by ecdfx;
if not last.ecdfx then delete;
complementary_cdf=100-ecdfy;
complementary_cdf=lag(complementary_cdf);
if complementary_cdf=. then complementary_cdf=100;
if ecdfx=round(&k./&n.-1,0.0001) then r=2;
if ecdfx=round(&k./&n.-0.5,0.0001) then r=1;
if ecdfx=round(&k./&n.-0,0.0001) then r=0;
if r ne . and ecdfy lt complementary_cdf then refline1=ecdfx;
if r ne . and complementary_cdf lt ecdfy then refline2=ecdfx;
run;

proc sgplot data=cdf;
refline refline1/ axis=x;
step x=ecdfx y=ecdfy / markers markerattrs=(symbol=circlefilled);
title 'p-value from simulated sampling distribution';
xaxis grid minor minorgrid label="^{unicode hat}^{unicode theta}14-^{unicode hat}^{unicode theta}2";
yaxis grid minor minorgrid label="Cumulative distribution function";
run;
title; footnote;

proc sgplot data=cdf;
refline refline2 / axis=x;
step x=ecdfx y=complementary_cdf / markers markerattrs=(symbol=circlefilled);
title 'p-value from simulated sampling distribution';
xaxis grid minor minorgrid label="^{unicode hat}^{unicode theta}14-^{unicode hat}^{unicode theta}2";
yaxis grid minor minorgrid label="Complementary CDF";
run;
title; footnote;

proc sort data=cdf out=cdf_pvalue (where=(r ne .));
by r;
run;

data cdf_pvalue;
set cdf_pvalue;
pvalue=min(ecdfy, complementary_cdf);
if r=2 then call symput('r2', pvalue);
if r=1 then call symput('r1', pvalue);
if r=0 then call symput('r0', pvalue);
run;

data cdf_pvalue;
set cdf_pvalue;
p0=&r0.;
p1=&r1.;
p2=&r2.;
if r=2 then confidence=100-p0-p1;
if r=1 then confidence=100-p0-p2;
if r=0 then confidence=100-p2-p1;
max=max(p0, p1, p2);
if round(pvalue,0.001) ne round(max,0.001) then confidence=pvalue;
pvalue2=pvalue/100;
confidence2=confidence/100;
*keep r pvalue2 confidence2;
rename pvalue2=pvalue confidence2=confidence;
run;

proc print data=cdf_pvalue noobs;
var r pvalue confidence;
footnote 'One-sided p-values from simulated sampling distribution';
run;
footnote;

* Bayesian posterior predictive distribution for r under a Beta(1,1) prior;
data bayes;
do sim=1 to 100000;
theta=rand('beta',1+&k.,1+&n.-&k.);
r=rand('binomial',theta,2);
output;
end;
run;

ods select none;
proc univariate data=bayes;
var r;
cdf r;
ods output cdfplot=cdf_bayes;
run;
ods select all;

ods noproctitle;
proc freq data=bayes;
table r / nocum;
footnote 'Bayesian predictive distribution';
run;
footnote;
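For readers without SAS, here is a rough Python equivalent of the core simulation (my own sketch, arbitrarily seeded; it reproduces the one-sided predictive p-values and the Beta(1,1) posterior predictive distribution only approximately):

import numpy as np

rng = np.random.default_rng(0)
k, n, n_sim = 10, 14, 100_000
theta = k / n                                   # evaluate near theta = 10/14

# simulated sampling distribution of the approximately ancillary quantity
t_sim = rng.binomial(n, theta, n_sim) / n - rng.binomial(2, theta, n_sim) / 2

for r in range(3):                              # hypothesized heads in the next 2 throws
    t_obs = k / n - r / 2
    lower = np.mean(t_sim <= t_obs)             # lower-tailed predictive p-value
    upper = np.mean(t_sim >= t_obs)             # upper-tailed predictive p-value
    print(f"H0: r={r}   lower-tailed p = {lower:.2f}, upper-tailed p = {upper:.2f}")

# Bayesian posterior predictive distribution for r under a Beta(1, 1) prior
theta_post = rng.beta(1 + k, 1 + n - k, n_sim)
r_pred = rng.binomial(2, theta_post)
print("posterior predictive P(r = 0, 1, 2):", np.bincount(r_pred, minlength=3) / n_sim)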
Examples of Bayesian and frequentist approach giving different answers
There are four sources of differences between Bayesian, Likelihoodist, Frequentist, and the various uncategorized miscellaneous methods, such as the method of moments, that I have been able to discover. The first has to do with differences in defining the idea of an optimal solution. The second has to do with the deeper layers of the methods, such as the axiom structure. The third has to do with prior knowledge. The fourth has to do with the intent of the model.

My daughter and I were home some very many years ago, and neither my wife nor sons were home. My guess is that they were involved in a band activity or maybe lacrosse. We had cake the day before, and when I opened the refrigerator, I realized that there was too much cake for one person but not enough cake for two people. I asked her if she wanted to split the last piece. I grabbed a knife to cut it, and she said, "can I cut the cake?" I said, "sure, but if you cut, then I get to choose the piece." That seemed most amenable to her little girl's mind, and I could see the wheels turning as she moved the knife around to work out how uneven she should cut the cake and not raise any protest. She finally made her cut, and I grabbed my piece of cake. A loud protest rang out, and I offered to put the pieces back together again to try and make the cut better. Her bewildered look was followed by an emphatic "no" from her. So I shrugged, walked off, and ate the cake, leaving her with the small piece. It was then that she realized her father was related to Darth Vader.

My daughter had used her prior knowledge of when I cut the cake and let her pick, had assigned a zero prior probability to my statement, and chose her fair cut. The likelihood, when combined with her posterior, changed her view of many parameters. This mentality of "I cut the cake, and you choose" is the basis of one of the axiomatizations of probability. I have just shown you its potential weakness. You should always assume a being of utter darkness is on the other side of the deal. My daughter now understands why I cast a shadow even when there is no direct source of light.

Let us assume that I will let you set the gambling odds, but I will choose what combination of bets to take. Presumably, you will not accept a gamble where I am certain to win regardless of the outcome of the event being gambled on. You won't play, "heads, I win; tails, you lose." So we are going to play a cake-cutting game and then gamble on it. By gambling on it, we change the intention of the game. We also invoke a very particular axiom set that we can contrast with another set. In addition, we can ponder the idea of optimality. Tangentially, we can discuss the small impact prior knowledge has.

By making you the financial intermediary, I am giving you control of the "bid" and the "ask," or alternatively, the vig, using bookies' terminology. To make the game fair, we agree that I will pay a flat fee per cut cake. In return, I can bet any finite sum. I can either go long or short on any gamble as well. We will assume that I have adequate collateral to cover any offered short bet. Also, to make the game fair, you agree to use the risk-neutral measure as you are collecting a flat fee. In other words, you will grant fair odds. The reason for the flat fee is that lump sums are constants, and so their derivative is zero. The fee does not play a factor in profit optimization.

You have two tasks. The first is to cut the cake, the second to give prices for lotteries. Either the left side or the right side of the cake will be larger. In the simple games of economics without uncertainty, you would want the cake to be cut as follows in the illustration. You want it to be cut in such a way that you cannot distinguish which side is bigger.

Unfortunately, this is not a game of perfect knowledge. We need to play a statistical game that has uncertainty in it, or it isn't any fun. To add uncertainty, we will have a robot put the cake in a darkened room so that neither of us can see the cake's location. A point on the table will be chosen by a random number generator. The generator will produce uniformly distributed points. A robot will enter the room and place the center of this perfectly circular cake on the randomly chosen point. A sensor will detect points from the cake in a uniformly distributed manner. In other words, both you and I will be provided with forty sets of points. Each point on the cake is equiprobable to be detected by the sensor.

You will then cut the cake through the minimum variance unbiased estimator (MVUE) of the center using an ultrapowerful laser cutter. Because the MVUE is guaranteed by force of math to be perfectly accurate, half the time the left side will be larger, as the sample size tends to infinity. If you granted even odds, and I always bet on the left side, one would expect a binomial distribution of outcomes over thirty cakes to look thusly.

But would betting on left every time be a rational decision on my part? After all, I have data. I could form a posterior distribution for the parameter and do another estimate. Since I have a computer, I could do it before you cut the cake, based only on our shared forty points. What if my Bayesian estimate didn't agree with your Frequentist estimate?

Now, this is where a bunch of mathematical tricks comes in. The intent is to gamble, which changes the outcome. If we were not gambling, although there would be differences, they wouldn't really matter. Indeed, all that would matter would be how we defined optimality upfront, determining which solution we should choose. Gambling is different because Frequentist statistics give rise to arbitrage opportunities at least in some percentage of the repetitions of a game. The issue has to do with a conflict between the Dutch Book Theorem and Kolmogorov's axioms. The appearance of this conflict varies from game to game but is always present. It sometimes takes a bit of digging to find the conflict, but it will be there as a pathology. There also has to be an opponent. Someone has to know that there is a mispricing of a lottery such as a stock, horse racing ticket, or options on wheat futures.

The specific reason is that Kolmogorov's third axiom is that the union of partitions of sets yields countably additive measures. You can cut a continuous probability distribution into a countably infinite number of parts under Frequentist axioms. The Dutch Book Theorem says that you cannot do that. You can cut sets, but the number of sets must be finite.

My objective function would be to win the largest number of gambles. It would not care if I used an unbiased method unless that maximizes wins. Now the prior does give me a slight edge. A cake near the border will, in part, sit off the table. Points over the edge cannot sit in a location that could also be the center of the cake. If all the observed points came from places over the edge, even though that would be an exceedingly rare combination of events, it would allow me to know that you have a mistake in your calculations. If the MVUE were sitting in midair instead of on the table, I would know your calculations were wrong. Nonetheless, we have forty data points. If we expanded the table enough, we could make that case vanishingly slight.

Still, there is a problem. I have data from which I can construct a posterior density. Bayesian bets cannot be arbitraged under mild additional conditions. Lotteries priced with Frequentist tools can be arbitraged at least some of the time. The Frequentist cake cut would pass through the circle like this. Please note that the circle is an oval on some machines due to differences in devices and settings. However, the MVUE is not inside the posterior. You can know that because you can place a circle of equal radius around the MVUE, and you will get the following graph. Obviously, the MVUE sits in a physically impossible location. That permits anybody to know which side will be larger. Knowing the points is sufficient to detect the differences between the two types of models against the reality. The green lines map the Bayesian posterior. The black point minimizes quadratic loss, but each point in the posterior is an equally valid possible location for the center of the cake.

I did a simulation putting a corner at (10,10) and (90,90), and I was guaranteed a win 48% of the time. Additionally, I won more than half the remaining gambles due to differences between the methods. For the MVUE to be guaranteed to be unbiased, it has to pay for that insurance policy with information. That difference in information results in a difference in precision. Because you are cutting at an angle from (0,0), you must be outside the joint Cartesian posterior and the marginal distribution of the posterior angle or slope. The MVUE can be closer to the true center even though it is outside the Cartesian posterior.

If I used your cuts and your odds for thirty rounds, my expected value, assuming I bet 100% of my money every round I knew the correct answer and made the Kelly-optimal bet the remaining 52% of the time, would be to make 128,000 times my initial amount. Under the Frequentist model where I always bet left or a random location, I would expect a small loss due to the fee I was paying to play.

I have a half dozen of these games related to asset pricing, trading, or options pricing. Real-world financial models are complicated because if you attempt to derive the Frequentist models in a Bayesian framework, you do not end up with models that look alike at all. It doesn't help, necessarily, to cut and paste a Bayesian procedure onto an economic model built on top of Frequentist axioms. Nonetheless, you can learn quite a bit when you cut a cake with a child involved.
Examples of Bayesian and frequentist approach giving different answers
A very obvious case where it makes a difference is when there is relevant prior information, but we are trying to analyze a small dataset. An example that I found quite useful (and used in my thesis, where full details of the analysis below are given) is the TGN1412 first-in-human trial.

During that first-in-human trial of a monoclonal antibody (i.e. TGN1412), all 6 of the 6 healthy volunteers in the test group ended up in a critical care unit due to cytokine storms, while this occurred for 0 of the 2 placebo subjects.

Adverse event                           TGN1412 (N=6)   Placebo (N=2)
Patient in ICU due to cytokine storm    6/6 (100%)      0/2 (0%)

As Stephen Senn pointed out, Fisher's exact test results in a one-sided p-value of 0.0357 (i.e. above 0.025). Also, if you do an exponential time-to-event model using exact Poisson regression, you get a median unbiased estimate of 1.05 (95% CI -0.62 to $\infty$) for the log-hazard ratio with a two-sided p-value of 0.3308. So, two somewhat reasonable frequentist analyses would normally be interpreted as (ignoring that this analysis was not pre-specified etc.) having not enough data to reject the null hypothesis that the drug does not increase the likelihood of cytokine storms. Nevertheless, when people talk about this trial, you will not hear any doubt that the drug caused these adverse events. Why is that?

Let's use historical discharges or deaths from intensive care units from around then (2001; the trial was in 2006) for the UK (both the trial and the historical data are from the UK). That's probably an upper bound for the expected rate, because this is the general population including more frail individuals (rather than the young and healthy volunteers in the trial) and the ICU stays were for any cause rather than for cytokine storms. From that I get a Gamma(1270614; 4808670538) prior for the control group exponential rate per patient year (probably reasonable to use an exponential distribution). If I take a Cauchy(0, 0.25) prior for the log-hazard ratio of TGN1412 vs. placebo, then I get a posterior median log-hazard ratio of 12.2 (95% CrI 11.3 to 13.0) with a posterior probability in excess of 99.999% that TGN1412 increased the hazard rate for admission to critical care.

That is, the prior knowledge on the rarity of the specific adverse event (a cytokine storm requiring admission to an intensive care unit) in young and healthy individuals means that these adverse events have been attributed to TGN1412, because we would a priori expect to see zero such cases with very high probability in such a short, small trial. The fact that we saw 6 cases out of 6 patients in the TGN1412 group is so implausible unless the drug caused it, that people are very convinced of a causal drug effect in this case.

Other examples where Bayesian methods make a huge difference are when your data tell you very little (or essentially nothing) about certain parameters in your model, but there is prior information on these parameters. Especially when we do not know the exact value of the parameters, a Bayesian treatment often becomes important. E.g. a lot of physics constants such as the speed of light are known with so little error that for many purposes it makes very little difference whether we take a Bayesian approach that accounts for the uncertainty around them or whether we plug in a constant. However, there are other quantities we know a decent amount about but still have a non-negligible amount of uncertainty about. In those situations a Bayesian approach is a good way of propagating the uncertainty into an analysis.
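As a quick sanity check of the quoted Fisher exact p-value, here is a small Python sketch of my own (the exact Poisson regression and the Bayesian model with the Gamma and Cauchy priors would require more machinery than shown here):

from scipy import stats

# 2x2 table: rows = TGN1412, placebo; columns = cytokine storm, no cytokine storm
table = [[6, 0],
         [0, 2]]

_, p_one_sided = stats.fisher_exact(table, alternative="greater")
print(p_one_sided)   # 0.0357..., i.e. above the one-sided 0.025 threshold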
3,747
Correlations between continuous and categorical (nominal) variables
The reviewer should have told you why the Spearman $\rho$ is not appropriate. Here is one version of that: Let the data be $(Z_i, I_i)$ where $Z$ is the measured variable and $I$ is the gender indicator, say 0 (man) or 1 (woman). Then Spearman's $\rho$ is calculated based on the ranks of $Z$ and $I$, respectively. Since there are only two possible values for the indicator $I$, there will be a lot of ties, so this formula is not appropriate. If you replace rank with mean rank, then you will get only two different values, one for men and another for women. Then $\rho$ will become basically some rescaled version of the mean ranks between the two groups. It would be simpler (and more interpretable) to simply compare the means!

Another approach is the following. Let $X_1, \dots, X_n$ be the observations of the continuous variable among men, and $Y_1, \dots, Y_m$ the same among women. Now, if the distribution of $X$ and of $Y$ are the same, then $P(X>Y)$ will be 0.5 (let's assume the distribution is purely absolutely continuous, so there are no ties). In the general case, define $$ \theta = P(X>Y) $$ where $X$ is a random draw among men and $Y$ among women. Can we estimate $\theta$ from our sample? Form all pairs $(X_i, Y_j)$ (assume no ties) and count for how many of them "the man is larger" ($X_i > Y_j$; call this count $M$) and for how many "the woman is larger" ($X_i < Y_j$; call this count $W$). Then one sample estimate of $\theta$ is $$ \frac{M}{M+W} $$ That is one reasonable measure of correlation! (If there are only a few ties, just ignore them). But I am not sure what that is called, if it has a name. This one may be close: https://en.wikipedia.org/wiki/Goodman_and_Kruskal%27s_gamma
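Here is a small R sketch of that pairwise estimate on made-up data (the variable names are illustrative only); with no ties it coincides with the Mann-Whitney U statistic divided by the number of pairs:

```r
set.seed(1)
# made-up example data
x <- rnorm(40, mean = 10, sd = 2)   # continuous variable, men
y <- rnorm(35, mean = 11, sd = 2)   # continuous variable, women

# count all pairs with "man larger" (M) and "woman larger" (W)
M <- sum(outer(x, y, ">"))
W <- sum(outer(x, y, "<"))
theta_hat <- M / (M + W)
theta_hat

# with no ties this equals the Mann-Whitney U statistic divided by n*m
wilcox.test(x, y)$statistic / (length(x) * length(y))
```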
3,748
Correlations between continuous and categorical (nominal) variables
I'm having the same issue now. I didn't see anyone reference this just yet, but I'm researching the Point-Biserial Correlation, which is built off the Pearson correlation coefficient. It is meant for a continuous variable and a dichotomous variable. Quick read: https://statistics.laerd.com/spss-tutorials/point-biserial-correlation-using-spss-statistics.php I use R, but I find SPSS has great documentation.
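Since the point-biserial correlation is just the Pearson correlation with the dichotomous variable coded as 0/1, a minimal R sketch (with made-up data) needs no special package:

```r
set.seed(2)
# made-up example: y is the 0/1 group indicator, x the continuous measurement
y <- rbinom(100, 1, 0.5)
x <- rnorm(100, mean = 5 + 0.8 * y)

cor(x, y)        # point-biserial correlation
cor.test(x, y)   # same estimate, with a test and confidence interval
```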
3,749
Correlations between continuous and categorical (nominal) variables
It would seem that the most appropriate comparison would be to compare the medians (as the variable is non-normal) and the distributions between the two binary categories. I would suggest the non-parametric Mann-Whitney test...
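For reference, a minimal R sketch of that test on made-up data (wilcox.test implements the Mann-Whitney test):

```r
set.seed(3)
# made-up example: skewed continuous variable in two groups
g1 <- rlnorm(60, meanlog = 0)
g2 <- rlnorm(60, meanlog = 0.4)

wilcox.test(g1, g2)                    # Mann-Whitney U test
wilcox.test(g1, g2, conf.int = TRUE)   # adds the Hodges-Lehmann shift estimate and CI
```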
3,750
Correlations between continuous and categorical (nominal) variables
For the specified problem, measuring the Area Under the Curve (AUC) of a Receiver Operating Characteristic (ROC) curve might help. I am not an expert in this, so I will try to keep it simple; please comment on any error or wrong interpretation so I can change it. $x$ is your continuous variable and $y$ is your categorical one. See how many true positives and false positives you get if you choose a value of $x$ as the threshold between positives and negatives (or male and female) and compare this to the real labels. For example, if you choose 7, then everything above $x=7$ is predicted female (1) and everything below $x=7$ male (0). Compare this to the real labels and count the true positives and false positives of your prediction. Repeating this procedure for thresholds from min($x$) to max($x$), you generate the true positive and false positive rates, which you can then plot (as in the figure below) and use to calculate the Area Under the Curve. The idea is that if there is no association between the variables, you will get the same ratio of true positives to false positives for all values of $x$; if, however, there is a good correlation (and the same holds for anti-correlation), that ratio will vary strongly as $x$ varies. This variation is what the Area Under the Curve quantifies. (Figure: example of good correlation (right) and fair anti-correlation (left).)
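Here is a minimal base-R sketch of this threshold sweep on made-up data (the numbers are illustrative only); the trapezoidal rule over the resulting points gives the AUC:

```r
set.seed(1)
# made-up example data: x continuous, y binary group labels (0 = male, 1 = female)
x <- c(rnorm(50, mean = 5), rnorm(50, mean = 7))
y <- rep(0:1, each = 50)

# sweep thresholds from below min(x) to above max(x)
thresholds <- c(-Inf, sort(unique(x)), Inf)
tpr <- sapply(thresholds, function(t) mean(x[y == 1] >= t))   # true positive rate
fpr <- sapply(thresholds, function(t) mean(x[y == 0] >= t))   # false positive rate

# plot the ROC curve and compute the area under it via the trapezoidal rule
o <- order(fpr, tpr)
plot(fpr[o], tpr[o], type = "l", xlab = "False positive rate", ylab = "True positive rate")
auc <- sum(diff(fpr[o]) * (head(tpr[o], -1) + tail(tpr[o], -1)) / 2)
auc   # ~0.5 means no association; values near 0 or 1 indicate a strong association
```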
3,751
Correlations between continuous and categorical (nominal) variables
I like to think of it in more practical terms. A simple use case for a continuous vs. categorical comparison is when you want to analyze treatment vs. control in an experiment. If you show a statistically significant difference between treatment and control, that implies that the categorical variable (treatment vs. control) does indeed affect the continuous variable. You can do the same thing with an ANOVA when you have multiple treatment groups. I think this is the most practical way of evaluating whether your categorical variable in any way affects the distribution of the continuous value.
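A minimal R sketch of that workflow, using made-up data:

```r
set.seed(4)
# made-up experiment: continuous outcome, categorical group
group   <- factor(rep(c("control", "treatment"), each = 50))
outcome <- rnorm(100, mean = ifelse(group == "treatment", 10.5, 10))

t.test(outcome ~ group)         # two groups
summary(aov(outcome ~ group))   # generalizes to several treatment groups
```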
3,752
Regression for an outcome (ratio or fraction) between 0 and 1
You should choose "hidden option c", where c is beta regression. This is a type of regression model that is appropriate when the response variable is distributed as Beta. You can think of it as analogous to a generalized linear model. It's exactly what you are looking for. There is a package in R called betareg which deals with this. I don't know if you use R, but even if you don't you could read the 'vignettes' anyway; they will give you general information about the topic in addition to how to implement it in R (which you wouldn't need in that case). Edit (much later): Let me make a quick clarification. I interpret the question as being about a continuous proportion formed from two positive real values (for example, if the two components are Gamma distributed with a common scale, their fraction follows a Beta distribution). However, if $a$ is a count of 'successes' out of a known total, $b$, of 'trials', then this would be a count proportion $a/b$, not a continuous proportion, and you should use a binomial GLM (e.g., logistic regression). For how to do it in R, see e.g. How to do logistic regression in R when outcome is fractional (a ratio of two counts)? Another possibility is to use linear regression if the ratios can be transformed so as to meet the assumptions of a standard linear model, although I would not be optimistic about that actually working.
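A minimal sketch of beta regression with the betareg package mentioned above, using made-up data (note that the response must lie strictly between 0 and 1):

```r
# install.packages("betareg")   # if not already installed
library(betareg)

set.seed(5)
# made-up data: a proportion strictly inside (0, 1) depending on a predictor x
d  <- data.frame(x = runif(200))
mu <- plogis(-1 + 2 * d$x)
d$y <- rbeta(200, shape1 = mu * 20, shape2 = (1 - mu) * 20)

fit <- betareg(y ~ x, data = d)
summary(fit)
predict(fit, newdata = data.frame(x = c(0.2, 0.8)))   # fitted mean proportions
```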
3,753
Regression for an outcome (ratio or fraction) between 0 and 1
Are these paired samples or two independent populations? If they are independent populations, you might consider the model $\log(M_i) = \log(B) + X_i \log(\text{ratio})$, where $M$ is your measurement vector (containing all values of A and B) and $X$ is an indicator vector with $X_i = 1$ if $M_i$ is a value of A and $X_i = 0$ if $M_i$ is a value of B. The intercept of this regression will be $\log(B)$ and the slope will be $\log(\text{ratio})$. See more here: Beyene J, Moineddin R. Methods for confidence interval estimation of a ratio parameter with application to location quotients. BMC Medical Research Methodology. 2005;5(1):32. EDIT: I have written an SPSS addon to do just this. I can share it if you're interested.
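A minimal R sketch of that model on made-up data (assuming two independent groups of positive-valued measurements, so the logs are defined):

```r
set.seed(6)
# made-up data: positive measurements from two independent groups
A <- rlnorm(40, meanlog = 1.2, sdlog = 0.4)
B <- rlnorm(50, meanlog = 1.0, sdlog = 0.4)

M <- c(A, B)                                         # all measurements
X <- rep(c(1, 0), times = c(length(A), length(B)))   # 1 = value from A, 0 = value from B

fit <- lm(log(M) ~ X)
coef(fit)                   # intercept ~ log(B level), slope ~ log(ratio A/B)
exp(confint(fit)["X", ])    # confidence interval for the ratio itself
```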
3,754
Regression for an outcome (ratio or fraction) between 0 and 1
Not true. The data for logistic regression are binary (0 or 1), but the model predicts $p$, the probability of success, given the predictors $X_i$, $i=1,2,\dots,k$, where $k$ is the number of predictor variables in the model. Because of the logit link, the linear model actually predicts the value of $\log\left(\frac{p}{1-p}\right)$. So to get the prediction for $p$ you just apply the inverse transformation $p=\frac{\exp(x)}{1+\exp(x)}$, where $x$ is the predicted logit.
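A small R sketch illustrating that back-transformation on made-up data; the manual inverse logit and predict(..., type = "response") should agree:

```r
set.seed(7)
# made-up binary data
x <- rnorm(200)
y <- rbinom(200, 1, plogis(-0.5 + 1.2 * x))
fit <- glm(y ~ x, family = binomial)

eta <- predict(fit, type = "link")       # predicted logit, log(p/(1-p))
p1  <- exp(eta) / (1 + exp(eta))         # manual inverse transformation
p2  <- predict(fit, type = "response")   # same thing, done by predict()
all.equal(unname(p1), unname(p2))        # TRUE
```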
3,755
Regression for an outcome (ratio or fraction) between 0 and 1
We can use sample weights in SVM-C or any other classifier that supports them, with the weights given by the ratio. There would be two data points for each data point in the original dataset: one with 1 as the target variable, where the sample weight equals the ratio, and one with 0 as the target variable, where the sample weight equals (1 - ratio). For the effect of sample weights on SVM-C, see https://scikit-learn.org/stable/auto_examples/svm/plot_weighted_samples.html We can then use the predicted probability for target value 1 as our ratio estimate. Here, the weights refer to the sample_weight argument of the fit method in sklearn.
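The same row-duplication idea can be sketched in R with any classifier that accepts case weights. The sketch below is only an analogue of the sklearn recipe above: it swaps SVM-C for a weighted logistic-type model (quasibinomial GLM), and the data and variable names are made up:

```r
set.seed(8)
# made-up data: a ratio outcome in (0, 1) and a single predictor
d <- data.frame(x = runif(200))
d$ratio <- plogis(-1 + 2 * d$x + rnorm(200, sd = 0.5))

# duplicate every row: target 1 with weight = ratio, target 0 with weight = 1 - ratio
dd <- rbind(data.frame(d, target = 1, w = d$ratio),
            data.frame(d, target = 0, w = 1 - d$ratio))

# a weighted logistic-type classifier (quasibinomial avoids non-integer-weight warnings)
fit <- glm(target ~ x, family = quasibinomial, data = dd, weights = w)

# the predicted probability of target = 1 is the estimated ratio
head(predict(fit, newdata = d, type = "response"))
```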
3,756
What does the inverse of covariance matrix say about data? (Intuitively)
It is a measure of precision just as $\Sigma$ is a measure of dispersion. More elaborately, $\Sigma$ is a measure of how the variables are dispersed around the mean (the diagonal elements) and how they co-vary with other variables (the off-diagonal elements). The greater the dispersion, the farther apart they are from the mean; and the more they co-vary (in absolute value) with the other variables, the stronger the tendency for them to 'move together' (in the same or opposite direction depending on the sign of the covariance). Similarly, $\Sigma^{-1}$ is a measure of how tightly clustered the variables are around the mean (the diagonal elements) and the extent to which they do not co-vary with the other variables (the off-diagonal elements). Thus, the higher the diagonal element, the tighter the variable is clustered around the mean. The interpretation of the off-diagonal elements is more subtle and I refer you to the other answers for that interpretation.
3,757
What does the inverse of covariance matrix say about data? (Intuitively)
Using superscripts to denote the elements of the inverse, $1/\sigma^{ii}$ is the variance of the component of variable $i$ that is uncorrelated with the $p-1$ other variables, and $-\sigma^{ij}/\sqrt{\sigma^{ii}\sigma^{jj}}$ is the partial correlation of variables $i$ and $j$, controlling for the $p-2$ other variables.
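A small R simulation sketch that checks both identities numerically (the covariance matrix used here is an arbitrary example):

```r
set.seed(42)
# simulate three correlated variables with an arbitrary covariance structure
n <- 5000
R <- matrix(c(1,   0.5, 0.3,
              0.5, 1,   0.4,
              0.3, 0.4, 1), nrow = 3)
X <- matrix(rnorm(3 * n), ncol = 3) %*% chol(R)

P <- solve(cov(X))   # precision matrix (inverse covariance)

# off-diagonal: partial correlation of variables 1 and 2, controlling for variable 3
-P[1, 2] / sqrt(P[1, 1] * P[2, 2])
cor(resid(lm(X[, 1] ~ X[, 3])), resid(lm(X[, 2] ~ X[, 3])))   # same number

# diagonal: 1/P[1,1] is the variance of variable 1 left over after regressing on the others
1 / P[1, 1]
var(resid(lm(X[, 1] ~ X[, 2] + X[, 3])))                      # same number
```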
3,758
How can I determine which of two sequences of coin flips is real and which is fake?
This is a variant on a standard intro stats demonstration: for homework after the first class I have assigned my students the exercise of flipping a coin 100 times and recording the results, broadly hinting that they don't really have to flip a coin and assuring them it won't be graded. Most will eschew the physical process and just write down 100 H's and T's willy-nilly. After the results are handed in at the beginning of the next class, at a glance I can reliably identify the ones who cheated: usually there are no runs of heads or tails longer than about 4 or 5, even though in just 100 flips we ought to see a longer run than that.

This case is subtler, but one particular analysis stands out as convincing: tabulate the successive ordered pairs of results. In a series of independent flips, each of the four possible pairs HH, HT, TH, and TT should occur equally often--which would be $(300-1)/4 = 74.75$ times each, on average. Here are the tabulations (counts of a first flip followed by a second flip) for the two series:

Series 1: HH = 46, HT = 102, TH = 102, TT = 49.
Series 2: HH = 71, HT = 76, TH = 77, TT = 75.

The first is obviously far from what we might expect. In that series, an H is more than twice as likely ($102:46$) to be followed by a T than by another H; and a T, in turn, is more than twice as likely ($102:49$) to be followed by an H. In the second series, those likelihoods are nearly $1:1,$ consistent with independent flips.

A chi-squared test works well here, because all the expected counts are far greater than the threshold of 5 often quoted as a minimum. The chi-squared statistics are 38.3 and 0.085, respectively, corresponding to p-values of less than one in a billion and 77%, respectively. In other words, a table of pairs as imbalanced as the second one is to be expected (due to the randomness), but a table as imbalanced as the first happens in less than one in every billion such experiments.

(NB: It has been pointed out in comments that the chi-squared test might not be applicable because these transitions are not independent: e.g., an HT can be followed only by a TT or TH. This is a legitimate concern. However, this form of dependence is extremely weak and has little appreciable effect on the null distribution of the chi-squared statistic for sequences as long as $300.$ In fact, the chi-squared distribution is a great approximation to the null sampling distribution even for sequences as short as $21,$ where the counts of the $21-1=20$ transitions that occur are expected to be $20/4=5$ of each type.)

If you know nothing about chi-squared tests, or even if you do but don't want to program the chi-square quantile function to compute a p-value, you can achieve a similar result. First develop a way to quantify the degree of imbalance in a $2\times 2$ table like this. (There are many ways, but all the reasonable ones are equivalent.) Then generate, say, a few hundred such tables randomly (by flipping coins--in the computer, of course!). Compare the imbalances of these two tables to the range of imbalances generated randomly. You will find the first sequence is far outside the range while the second is squarely within it.

This figure summarizes such a simulation using the chi-squared statistic as the measure of imbalance. Both panels show the same results: one on the original scale and the other on a log scale. The two dashed vertical lines in each panel show the chi-squared statistics for Series 1 (right) and Series 2 (left). The red curve is the $\chi^2(1)$ density. It fits the simulations extremely well at the right (higher values).
The discrepancies for low values occur because this statistic has a discrete distribution which cannot be well approximated by any continuous distribution where it takes on small values -- but for our purposes that makes no difference at all.
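For anyone who wants to reproduce this, here is a minimal R sketch. It applies chisq.test to the transition counts above (chisq.test applies Yates' continuity correction to 2x2 tables by default, and with it the statistics quoted above, 38.3 and 0.085, should be reproduced); the commented lines show how such a table can be built from a raw sequence of flips:

```r
# transition counts (rows = current flip, columns = next flip), taken from the tables above
series1 <- matrix(c(46, 102,
                    102, 49), nrow = 2, byrow = TRUE,
                  dimnames = list(c("H", "T"), c("H", "T")))
series2 <- matrix(c(71, 76,
                    77, 75), nrow = 2, byrow = TRUE,
                  dimnames = list(c("H", "T"), c("H", "T")))

chisq.test(series1)   # X-squared ~ 38.3, p < 1e-9
chisq.test(series2)   # X-squared ~ 0.085, p ~ 0.77

# to build such a table from a raw sequence of flips (a character vector of "H"/"T"):
# flips <- strsplit("THHTT...", "")[[1]]          # placeholder sequence
# table(head(flips, -1), tail(flips, -1))
```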
3,759
How can I determine which of two sequences of coin flips is real and which is fake?
There are two very good answers as of writing this, and so let me add a needlessly complex yet interesting approach to this problem. I think one way to operationalize the human generated vs truly random question is to ask if the flips are autocorrelated. The hypothesis here being that humans will attempt to appear random by not having too many strings of one outcome, hence switching from heads to tails and tails to heads more often than would be observed in a truly random sequence. Whuber examines this nicely with a 2x2 table, but because I am a Bayesian and a glutton for punishment let's write a simple model in Stan to estimate the lag-1 autocorrelation of the flips. Speaking of Whuber, he has nicely laid out the data generating process in this post. You can read his answer to understand the data generating process. Let $\rho$ be the lag 1 autocorrelation of the flips, and let $q$ be the proportion of flips which are heads in the sequence. A fair coin should have 0 autocorrelation, so we are looking for our estimate of $\rho$ to be close to 0. From there, we only need to count the number of occurrences of $H,H$, $H, T$, $T, H$ and $T,T$ in the sequence. The Stan model is shown below data{ int y_1_1; //number of concurrent 1s int y_0_1; //number of 0,1 occurrences int y_1_0; //number of 1,0 occurrences int y_0_0; //number of concurrent 0s } parameters{ real<lower=-1, upper=1> rho; real<lower=0, upper=1> q; } transformed parameters{ real<lower=0, upper=1> prob_1_1 = q + rho*(1-q); real<lower=0, upper=1> prob_0_1 = (1-q)*(1-rho); real<lower=0, upper=1> prob_1_0 = q*(1-rho); real<lower=0, upper=1> prob_0_0 = 1 - q + rho*q; } model{ q ~ beta(1, 1); target += y_1_1 * bernoulli_lpmf(1| prob_1_1); target += y_0_1 * bernoulli_lpmf(1| prob_0_1); target += y_1_0 * bernoulli_lpmf(1| prob_1_0); target += y_0_0 * bernoulli_lpmf(1| prob_0_0); } Here, I've placed a uniform prior on the autocorrelation $$ \rho \sim \mbox{Uniform}(-1, 1) $$ and on the probability of a head $$ q \sim \operatorname{Beta}(1, 1) $$ Our likelihood is Bernoulli, and I have weighted the likelihood by the number of occurrences of each pair of outcomes. The probabilities of each outcome (e.g. probability of observing a heads conditioned on the previous flip being a heads) is provided by Whuber in his linked answer. Let's run our model and compare posterior distributions for the two sequences The estimated auto correlation for sequence 1 is -0.36, and the estimated autocorrelation for sequence 2 is -0.02 (close enough to 0). If I was a betting man, I'd put my money on sequence 1 being the sequence generated by a human. The negative autocorrelation means that when we see a heads/tails we are more likely to see a tails/heads! This observation lines up nicely with the 2x2 table provided by Whuber. Code The plot I present is made in R, but here is some python code to do the same thing since you asked import matplotlib.pyplot as plt import cmdstanpy # You will need to install cmdstanpy prior to running this code # Write the stan model as a string. 
We will then write it to a file stan_code = ''' data{ int y_1_1; //number of concurrent 1s int y_0_1; //number of 0,1 occurrences int y_1_0; //number of 1,0 occurrences int y_0_0; //number of concurrent 0s } parameters{ real<lower=-1, upper=1> rho; real<lower=0, upper=1> q; } transformed parameters{ real<lower=0, upper=1> prob_1_1 = q + rho*(1-q); real<lower=0, upper=1> prob_0_1 = (1-q)*(1-rho); real<lower=0, upper=1> prob_1_0 = q*(1-rho); real<lower=0, upper=1> prob_0_0 = 1 - q + rho*q; } model{ q ~ beta(1, 1); target += y_1_1 * bernoulli_lpmf(1| prob_1_1); target += y_0_1 * bernoulli_lpmf(1| prob_0_1); target += y_1_0 * bernoulli_lpmf(1| prob_1_0); target += y_0_0 * bernoulli_lpmf(1| prob_0_0); } ''' # Write the model to a temp file with open('model_file.stan', 'w') as model_file: model_file.write(stan_code) # Compile the model model = cmdstanpy.CmdStanModel(stan_file='model_file.stan', compile=True) # Co-occurring counts for heads (1) and tails (0) for each sequence data_1 = dict(y_1_1 = 46, y_0_0 = 49, y_0_1 = 102, y_1_0 = 102) data_2 = dict(y_1_1 = 71, y_0_0 = 75, y_0_1 = 76, y_1_0 = 77) # Fit each model fit_1 = model.sample(data_1, show_progress=False) rho_1 = fit_1.stan_variable('rho') fit_2 = model.sample(data_2, show_progress=False) rho_2 = fit_2.stan_variable('rho') # Make a pretty plot fig, ax = plt.subplots(dpi = 240, figsize = (5, 3)) ax.set_xlim(-1, 1) ax.hist(rho_1, color = 'blue', alpha = 0.5, edgecolor='k', label='Sequence 1') ax.hist(rho_2, color = 'red', alpha = 0.5, edgecolor='k', label='Sequence 2') ax.legend()
3,760
How can I determine which of two sequences of coin flips is real and which is fake?
This is a class activity I first read about in the book Teaching Statistics. A Bag of Tricks, 2nd ed. by Andrew Gelman and Deborah Nolan (they recommend 100 flips, though). Their reasoning to detect the fabricated sequence is based on the combination of the longest run and the number of runs. For the following plot, I simulated 5000 sequences of 300 fair coin tosses and plotted the longest run on the y-axis and the number of runs on the x-axis (I once asked a question about the explicit joint probability). Each dot represents the result of 300 fair flips. For better visibility, the points are jittered. The numbers for the two sequences are plotted in color. The conclusion is obvious.

For a quick calculation, recall that a rule of thumb for the longest run of either heads or tails in $n$ tosses is$^{[1]}$ $l = \log_{1/p}(n(1-p)) + 1$. For an approximate 95% prediction interval, just add and subtract $3$ from this value. Surprisingly, this number (i.e. $\pm 3$) does not depend on $n$! Applied to a fair coin with $n=300, p=1/2$, we have $l=\log_2(300/2) + 1=8.22$. So we expect the longest run to be around $8$ and reasonably in the range of $8\pm 3$, so between $5$ and $11$. The longest run in sequence 2 is $8$, whereas it is $4$ in sequence 1. As this is outside the approximate prediction interval, we'd conclude that sequence 1 is suspicious under the assumption of $p=1/2$. $[1]$ Schilling MF (2012): The Surprising Predictability of Long Runs. Math. Mag. 85: 141-149. (link)
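A minimal R sketch of both quantities for a sequence of flips (the sequence here is simulated; rle() returns the run lengths directly):

```r
set.seed(9)
flips <- sample(c("H", "T"), 300, replace = TRUE)   # simulated fair flips

runs <- rle(flips)
max(runs$lengths)        # longest run (compare with ~8 +/- 3 for n = 300, p = 1/2)
length(runs$lengths)     # number of runs

log2(300 * (1 - 0.5)) + 1   # rule-of-thumb expected longest run, ~8.2
```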
3,761
How can I determine which of two sequences of coin flips is real and which is fake?
The runs test (NIST page) is a nonparametric test designed to identify unusual frequencies of runs. If we observe $n_1$ heads and $n_2$ tails, the expected value and variance of the number of runs are: $$\mu = {2n_1n_2 \over n_1+n_2} + 1$$ $$\sigma^2 = {2n_1n_2(2n_1n_2 - n_1 - n_2) \over (n_1+n_2)^2(n_1+n_2+1)}$$ As a rule of thumb, for $n_1, n_2 \geq 10$ the distribution of the observed number of runs is reasonably well-approximated by a Normal distribution.

Edit (incorporating Eric Duminil's work below): For sequence 1, we have 148 heads, 152 tails and 205 runs, and for sequence 2, we have 148 heads, 152 tails and 154 runs. Plugging these numbers into our formulae above gives us $z$-scores of 6.5 for the first sequence and 0.58 for the second sequence - extremely strong evidence that the first sequence is fake. When people fake sequences like this, they tend to greatly underestimate the probability of longer runs, so they don't create as many long(ish) runs as they should. This in turn tends to increase the number of runs beyond that which would be expected. Consequently, when testing for faked data, we might prefer a one-sided test of the alternative hypothesis that there are "too many" runs vs. the null hypothesis that the number of runs is average - at least if we think the sequence was created by a human being.
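A minimal base-R sketch of this calculation, counting the runs in a given sequence of flips and computing the normal-approximation z-score from the formulas above:

```r
runs_z <- function(flips) {
  n1 <- sum(flips == "H"); n2 <- sum(flips == "T")
  runs <- sum(head(flips, -1) != tail(flips, -1)) + 1   # number of runs
  mu <- 2 * n1 * n2 / (n1 + n2) + 1
  v  <- 2 * n1 * n2 * (2 * n1 * n2 - n1 - n2) /
        ((n1 + n2)^2 * (n1 + n2 + 1))
  c(runs = runs, z = (runs - mu) / sqrt(v))
}

set.seed(10)
runs_z(sample(c("H", "T"), 300, replace = TRUE))   # a fair sequence: z near 0
```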
3,762
How can I determine which of two sequences of coin flips is real and which is fake?
Here's an empirical approach, based on compression as a proxy for algorithmic complexity: import bz2 import random import statistics s1 = "TTHHTHTTHTTTHTTTHTTTHTTHTHHTHHTHTHHTTTHHTHTHTTHTHHTTHTHHTHTTTHHTTHHTTHHHTHHTHTTHTHTTHHTHHHTTHTHTTTHHTTHTHTHTHTHTTHTHTHHHTTHTHTHHTHHHTHTHTTHTTHHTHTHTHTTHHTTHTHTTHHHTHTHTHTTHTTHHTTHTHHTHHHTTHHTHTTHTHTHTHTHTHTHHHTHTHTHTHHTHHTHTHTTHTTTHHTHTTTHTHHTHHHHTTTHHTHTHTHTHHHTTHHTHTTTHTHHTHTHTHHTHTTHTTHTHHTHTHTTT" s2 = "HTHHHTHTTHHTTTTTTTTHHHTTTHHTTTTHHTTHHHTTHTHTTTTTTHTHTTTTHHHHTHTHTTHTTTHTTHTTTTHTHHTHHHHTTTTTHHHHTHHHTTTTHTHTTHHHHTHHHHHHHHTTHHTHHTHHHHHHHTTHTHTTTHHTTTTHTHHTTHTTHTHTHTTHHHHHTTHTTTHTHTHHTTTTHTTTTTHHTHTHHHHTTTTHTHHHTHHTHTHTHTHHHTHTTHHHTHHHHHHTHHHTHTTTHHHTTTHHTHTTHHTHHHTHTTHTTHTTTHHTHTHTTTTHTHTHTTHTHTHT" def compressed_len(s): return len(bz2.compress(s.encode())) trials = [] for x in range(100000): sr = "".join(random.choice("HT") for _ in range(300)) trials.append(compressed_len(sr)) mean = statistics.mean(trials) stddev = statistics.stdev(trials) print("Random trials:") print("Mean:", mean) print("Stddev:", stddev) l1 = compressed_len(s1) l2 = compressed_len(s2) o1 = l1 - mean o2 = l2 - mean d1 = o1 / stddev d2 = o2 / stddev print("Selected trials:") print("Seq", "Len", "Dev", sep="\t") print("S1", l1, d1, sep="\t") print("S2", l2, d2, sep="\t") Roughly speaking: Compress a bunch (100k) of random coinflips. Observe the resulting length distribution. (In this case I'm approximating it as a normal distribution of lengths; a more thorough analysis would check and pick an appropriate distribution instead of blithely assuming normality.) Compress the input sequences. Compare with observed distribution. Result (note: not exactly reproducible due to the use of random trials; if you want to be reproducible add a random seed): Random trials: Mean: 105.05893 Stddev: 2.6729774976956002 Selected trials: Seq Len Dev S1 88 -6.381995364609942 S2 109 1.4744119632124217 Based on this, I'd say that S1 is the non-random one here. 6.38 standard deviations below the mean is rather improbable. The nice thing about this approach is that it's relatively generic, and takes advantage of the pre-existing work of a bunch of smart people. Just be aware of its limitations and quirks: You want a compression algorithm that's designed for space over compression speed. BZ2 works well enough here. This doesn't work if the compression algorithm simply gives up and writes a raw block to the output. A null result does not mean that the sequence is random. It means that this compression algorithm is unable to distinguish this input from random.
3,763
How can I determine which of two sequences of coin flips is real and which is fake?
This is probably an overcomplicated way of looking at it, but for me it's fun, so I present to you... Moran's I. Now, Moran's I was developed to look at spatial autocorrelation (basically autocorrelation with multiple dimensions), but it can be applied to the 1-dimensional case as well. Some of my interpretations might be a little sketchy, but you can consider your coin flips this way. To summarize, Moran's I compares each value to its neighboring values using a pre-defined weights matrix. How you define the matrix is up to you, but it can actually be used to consider not just the directly neighboring values, but any values beyond that. Moran's I will produce a value ranging from -1 (perfectly dispersed values) to 1 (perfectly clustered values), with 0 being random. I wrote up some quick R code. First, set up the data (OP's data and a couple of generated data sets to test dispersion and clustering): seq1 = unlist(strsplit("TTHHTHTTHTTTHTTTHTTTHTTHTHHTHHTHTHHTTTHHTHTHTTHTHHTTHTHHTHTTTHHTTHHTTHHHTHHTHTTHTHTTHHTHHHTTHTHTTTHHTTHTHTHTHTHTTHTHTHHHTTHTHTHHTHHHTHTHTTHTTHHTHTHTHTTHHTTHTHTTHHHTHTHTHTTHTTHHTTHTHHTHHHTTHHTHTTHTHTHTHTHTHTHHHTHTHTHTHHTHHTHTHTTHTTTHHTHTTTHTHHTHHHHTTTHHTHTHTHTHHHTTHHTHTTTHTHHTHTHTHHTHTTHTTHTHHTHTHTTT", split = "")) seq2 = unlist(strsplit("HTHHHTHTTHHTTTTTTTTHHHTTTHHTTTTHHTTHHHTTHTHTTTTTTHTHTTTTHHHHTHTHTTHTTTHTTHTTTTHTHHTHHHHTTTTTHHHHTHHHTTTTHTHTTHHHHTHHHHHHHHTTHHTHHTHHHHHHHTTHTHTTTHHTTTTHTHHTTHTTHTHTHTTHHHHHTTHTTTHTHTHHTTTTHTTTTTHHTHTHHHHTTTTHTHHHTHHTHTHTHTHHHTHTTHHHTHHHHHHTHHHTHTTTHHHTTTHHTHTTHHTHHHTHTTHTTHTTTHHTHTHTTTTHTHTHTTHTHTHT", split = "")) # Alternate T and H. E.g., THTHTHTHT.... # 'perfectly dispersed' # Moran's I = -1 seq3 = rep(c("T", "H"), times = 50) # 50 of T followed by 50 of H # 'perfectly clustered' # Moran's I approaches 1 as the sample size increases to infinity seq4 = rep(c("T", "H"), each = 50) # weights must be a vector with an odd length and the middle value set to 0 # weights are relative and do not have to add to 1 moran <- function(x, weights) { x = c(`T` = 0, `H` = 1)[x] # convert T/H to 0/1 N = length(x) x_mean = mean(x) den = sum((x - x_mean)^2) W = 0 num = 0 offset = floor(length(weights)/2) x_padded = c(rep(NA, 10), x, rep(NA,10)) # padding for sliding window for (i in 1:length(x)) { x_slice = x_padded[(i+10-offset):(i+10+offset)] W = W + as.numeric(!is.na(x_slice)) %*% weights num = num + (x[i] - x_mean) * sum((x_slice - x_mean) * weights, na.rm = TRUE) } return(unname((N * num)/(as.numeric(W) * den))) } Next, I test the generated data sets to illustrate/test that my function is working correctly: # Test the 'perfect dispersion' scenario (should be -1) moran(seq3, c(1, 0, 1)) ## [1] -1 # Test the 'perfect clustering' scenario (should be ~1) moran(seq4, c(1, 0, 1)) ## [1] 0.979798 Now, let's look at OP's sequences: # Simple look at seq1. The weights test the idea that the current flip # is based purely on the last flip (a reasonable model for how a person might react) moran(seq1, c(1, 0, 0)) ## [1] -0.3647031 moran(seq2, c(1, 0, 0)) ## [1] -0.02359453 I'm defining my weights matrix such that only the previous flip is considered when testing for autocorrelation. We see that the second sequence is very close to 0 (random), whereas the first sequence seems to lean somewhat toward overdispersion.
But maybe we think someone faking coin flips would consider the last two flips, not just the most recent: # Maybe the person is looking back at the last two flips moran(seq1, c(1, 1, 0, 0, 0)) ## [1] -0.1726056 moran(seq2, c(1, 1, 0, 0, 0)) ## [1] 0.0249505 The second sequence is just as close to 0 as before, but the first sequence shows a pretty noticeable shift towards 0. This might be interpretable in a couple of different ways. First, if we know that the first sequence is fake, then maybe it means the person wasn't considering two flips back. A second interpretation is that maybe they were considering the last two flips, and somehow this led them to do a better job at faking randomization. A third option might just be sheer dumb luck at faking the randomization. Now, maybe the person considers the last two coin flips but gives the most recent flip more importance. # Same idea, but maybe the more recent of the two is twice as important moran(seq1, c(1, 2, 0, 0, 0)) ## [1] -0.2367095 moran(seq2, c(1, 2, 0, 0, 0)) ## [1] 0.008750762 Here, we see the two sequences react differently. The second sequence (already pretty close to 0) gets noticeably closer to 0, whereas the first sequence shifts noticeably away. I'm not sure I want to try to interpret this, but it's an interesting result, and a similar thing happens if we try to model a scenario where the person is not only considering their previous flips but also thinking ahead to their next flip: # Maybe the person was thinking ahead to their next flip as well moran(seq1, c(1, 2, 0, 1, 0)) ## [1] -0.2687347 moran(seq2, c(1, 2, 0, 1, 0)) ## [1] 0.0006576715 Some of my application/interpretation of Moran's I to the coin flip problem might be a little off, but it's definitely an applicable measure to use. A related metric is Geary's C, which is more sensitive to local autocorrelation.
3,764
How can I determine which of two sequences of coin flips is real and which is fake?
When people try to generate random sequences, they tend to avoid repeating themselves more than random processes avoid repeating themselves. Thus, if we look at consecutive pairs of flips, we would expect a human-generated sequence to have too many HT and TH and too few HH and TT compared to a typical random sequence. The code below explores this hypothesis. It splits each sequence of 300 flips into 150 consecutive pairs and plots the frequency of the four possible results (HH, HT, TH, TT).* library(tidyverse) a <- "TTHHTHTTHTTTHTTTHTTTHTTHTHHTHHTHTHHTTTHHTHTHTTHTHHTTHTHHTHTTTHHTTHHTTHHHTHHTHTTHTHTTHHTHHHTTHTHTTTHHTTHTHTHTHTHTTHTHTHHHTTHTHTHHTHHHTHTHTTHTTHHTHTHTHTTHHTTHTHTTHHHTHTHTHTTHTTHHTTHTHHTHHHTTHHTHTTHTHTHTHTHTHTHHHTHTHTHTHHTHHTHTHTTHTTTHHTHTTTHTHHTHHHHTTTHHTHTHTHTHHHTTHHTHTTTHTHHTHTHTHHTHTTHTTHTHHTHTHTTT" b <- "HTHHHTHTTHHTTTTTTTTHHHTTTHHTTTTHHTTHHHTTHTHTTTTTTHTHTTTTHHHHTHTHTTHTTTHTTHTTTTHTHHTHHHHTTTTTHHHHTHHHTTTTHTHTTHHHHTHHHHHHHHTTHHTHHTHHHHHHHTTHTHTTTHHTTTTHTHHTTHTTHTHTHTTHHHHHTTHTTTHTHTHHTTTTHTTTTTHHTHTHHHHTTTTHTHHHTHHTHTHTHTHHHTHTTHHHTHHHHHHTHHHTHTTTHHHTTTHHTHTTHHTHHHTHTTHTTHTTTHHTHTHTTTTHTHTHTTHTHTHT" # split each sequence of 300 into 150 consecutive pairs # e.g. TTHHTHTT... -> TT, HH, TH, TT, ... n_pairs <- 150 ap <- tibble(pair = character(n_pairs)) bp <- tibble(pair = character(n_pairs)) for (i in 1:n_pairs) { ap$pair[i] <- substring(a, 2*i - 1, 2*i) bp$pair[i] <- substring(b, 2*i - 1, 2*i) } # get the frequencies of each possible pair and plot apc <- count(ap, pair) bpc <- count(bp, pair) bind_rows( `Sequence 1` = apc, `Sequence 2` = bpc, .id = 'source') %>% ggplot(aes(x = pair, y = n, group = source, fill = source)) + geom_col() + facet_grid(vars(source)) + theme_minimal() + geom_hline(yintercept = n_pairs/4, linetype = 'dashed') + ylab('frequency') + ggtitle('Frequency of consecutive coin flip pairs') The dotted line at 150/4 = 37.5 is the expected count of each possible pair assuming the coin flips are independent and fair. By the Law of Large Numbers, we expect the bars not to stray too far from the dotted line. Sequence 1 has an above-average number of HT and TH pairs (especially HT), consistent with our hypothesis about human-generated "randomness". The pairs from Sequence 2 are more consistent with average behavior. To see how unusual this behavior would be under independent, fair flips, we reformat each sequence's pair count data as a 2x2 contingency table (rows = first flip H/T, columns = second flip H/T) and use Fisher's exact test, which checks whether the data is consistent with a null hypothesis in which the first flip is independent of the second: for (x in list(apc, bpc)) { print(x) x %>% mutate(f1 = str_sub(pair, 1, 1), f2 = str_sub(pair, 2, 2)) %>% select(f1, f2, n) %>% spread(f2, n) %>% select(-f1) %>% as.matrix() %>% fisher.test() %>% print() } The contingency table for Sequence 1 has a p value of 0.0002842, while the table for Sequence 2 has 0.5127. This means that pair frequencies skewed to the degree seen in Sequence 1 would only occur by chance about 1 time in 3,519 (= 1/0.0002842), while something like Sequence 2 would be seen very commonly. It seems sensible to conclude that Sequence 1 is the human-made sequence, since its pair frequency table is not consistent with random chance but is quite consistent with the behavior we'd expect of humans. There is a caveat to this analysis: we do not expect random sequences to be perfectly consistent with average behavior. 
In fact, in some contexts people know that long random sequences should follow the Law of Large Numbers, and they create sequences which follow it too perfectly. A different analysis would be needed to explore whether Sequence 2 looks odd from this opposite perspective. * Other answers look at all 299 consecutive pairs, which gives you more data points but they become dependent, which prevents us from using standard significance tests. (For example in the sequence TTHHT, you can make pairs like this: (TT)HHT, T(TH)HT, TT(HH)T, .... This gives you more pairs but consecutive pairs are not independent of one another, as the second flip of a pair determines the first flip of the next pair.) Alternate Analysis An analysis that uses all 299 pairs of consecutive flips could be more powerful than the one above if the dependency problem can be solved. To do this, @whuber suggests looking at the transitions between consecutive flips, i.e. when H is followed by T or vice versa. If the flips are independent and fair, then after the first flip, each transition can be considered an independent Bernoulli random variable, and there are 299 transitions total. We can use a two-sided test to see whether the number of transitions observed in each sequence is unlikely under fair independent flips. The transitions are counted and the test applied by the code below: library(tidyverse) a <- "TTHHTHTTHTTTHTTTHTTTHTTHTHHTHHTHTHHTTTHHTHTHTTHTHHTTHTHHTHTTTHHTTHHTTHHHTHHTHTTHTHTTHHTHHHTTHTHTTTHHTTHTHTHTHTHTTHTHTHHHTTHTHTHHTHHHTHTHTTHTTHHTHTHTHTTHHTTHTHTTHHHTHTHTHTTHTTHHTTHTHHTHHHTTHHTHTTHTHTHTHTHTHTHHHTHTHTHTHHTHHTHTHTTHTTTHHTHTTTHTHHTHHHHTTTHHTHTHTHTHHHTTHHTHTTTHTHHTHTHTHHTHTTHTTHTHHTHTHTTT" b <- "HTHHHTHTTHHTTTTTTTTHHHTTTHHTTTTHHTTHHHTTHTHTTTTTTHTHTTTTHHHHTHTHTTHTTTHTTHTTTTHTHHTHHHHTTTTTHHHHTHHHTTTTHTHTTHHHHTHHHHHHHHTTHHTHHTHHHHHHHTTHTHTTTHHTTTTHTHHTTHTTHTHTHTTHHHHHTTHTTTHTHTHHTTTTHTTTTTHHTHTHHHHTTTTHTHHHTHHTHTHTHTHHHTHTTHHHTHHHHHHTHHHTHTTTHHHTTTHHTHTTHHTHHHTHTTHTTHTTTHHTHTHTTTTHTHTHTTHTHTHT" # 299 transitions from flip i to i+1 occur in the sequence of 300 # record these transitions in arrays at and bt n <- 299 at <- logical(n) bt <- logical(n) for (i in 1:n) { at[i] <- str_sub(a, i + 1, i + 1) != str_sub(a, i, i) bt[i] <- str_sub(b, i + 1, i + 1) != str_sub(b, i, i) } # two-sided exact binomial test (analogous to z-test) # gives probability of transition count more extreme than the one observed pbinom(sum(at) - 1, n, 1/2, lower.tail = F) + pbinom(n - sum(at), n, 1/2) pbinom(sum(bt) - 1, n, 1/2, lower.tail = F) + pbinom(n - sum(bt), n, 1/2) Running this code, we find that Sequence 1 has 204 transitions out of a possible 299. The probability of observing a number of transitions at least this imbalanced, on either the left or the right side, is equal to the probability of observing at least 204 transitions, plus the probability of observing at most 299 - 204 = 95 transitions. This probability is 2.696248e-10, on the order of 3 in 10 billion. Sequence 2 has 153 transitions, and the probability of observing at least 153 transitions or at most 299 - 153 = 146 transitions is 0.7286639. The number of transitions in Sequence 1 is extremely improbable, more so than the 150 pair test above suggested.
3,765
How can I determine which of two sequences of coin flips is real and which is fake?
This answer is inspired by @user1717828's answer, which transforms the sequence of coin flips into a random walk. I don't show the two given sequences as random walks here; see @user1717828's answer for that plot. The random walk approach is interesting because it examines long-run features of the sequence rather than short-range ones (such as the overabundance of H-T and T-H flips). In a way the two approaches are complementary, as they analyze the sequences at different scales. At the "local" scale the fake sequence oscillates, which creates autocorrelation; at the "global" scale it keeps within a limited range of positions for long stretches of time. Both of these aspects are non-random, and as one occurs, so does the other: staying in place means not moving around and vice versa. A (one-dimensional) simple random walk $S_n$, defined as the sum of $n$ random variables that take the values -1 and +1 each with probability ½, has many interesting properties, including: The probability that a simple random walk returns to the origin is 1. In fact, it visits every integer infinitely often. The mean of a simple random walk is $\text{E}(S_n) = 0$ and its variance is $\text{Var}(S_n) = n$. A random walk whose steps have nonzero mean (which corresponds to flipping a biased coin) is transient: it makes finitely many visits to the origin before diverging, to +∞ if the mean is positive and to -∞ if the mean is negative. These properties imply that a simple random walk spends a lot of time far away from 0 before eventually returning to it. We can use this intuition founded on theory (as well as any other property of random walks) to propose characteristics for comparing the two sequences. Then we generate many simple random walks and use the observed distribution of the features to estimate how "extreme" the two given sequences are. To capture the variance of a simple random walk I use the maximum distance from 0. To capture its tendency to visit every integer I use the number of times the walk crosses the integers from -8 to +8. To make a symmetric grid of scatterplots and to avoid suspicions of HARKing I don't show the crossings of 0. It's important to look at crossings on both sides of 0 (the starting point), so that we don't make assumptions about how the non-randomness manifests in the data. For example, we don't know whether the coin is fair (p = ½), biased towards heads (p > ½) or biased towards tails (p < ½). Note: HARKing is the practice of performing many analyses to find an interesting hypothesis. I compute many statistics, grounded in the theory of random walks, and report all of them, thus avoiding HARKing. Here are the results from the simulation of 500 simple random walks. Sequence #1 (in blue) appears as an outlier compared to sequence #2 (in red) in many panels. Due to its overabundance of H-T and T-H pairs, sequence #1 doesn't explore enough and spends too much time between -5 and -8 (shown in the top row). Sequence #1 doesn't advance beyond the band [-9, 9], and it is rare for a true random walk to stay so close to 0 for 300 steps. Visually, sequence #1 is the overall outlier even if it doesn't appear extremely "unusual" in some panels.
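As a concrete illustration (my own addition, not code from the original answer), here is a minimal Python sketch of the kind of simulation described above: it converts an H/T string into a walk, records the maximum distance from 0 and the number of crossings of a chosen level, and builds a reference distribution from simulated fair walks. The function names (walk, max_distance, crossings, simulate) are made up for this sketch.

import random

def walk(seq):
    # cumulative position of the walk: H -> +1, T -> -1
    pos, path = 0, [0]
    for c in seq:
        pos += 1 if c == "H" else -1
        path.append(pos)
    return path

def max_distance(path):
    return max(abs(p) for p in path)

def crossings(path, level):
    # number of times the walk passes through `level`
    signs = [1 if p > level else -1 for p in path if p != level]
    return sum(a != b for a, b in zip(signs, signs[1:]))

def simulate(n_walks=500, n_flips=300, level=5):
    stats = []
    for _ in range(n_walks):
        p = walk("".join(random.choice("HT") for _ in range(n_flips)))
        stats.append((max_distance(p), crossings(p, level)))
    return stats

# Usage: compute (max_distance, crossings) for the two 300-flip sequences above
# and compare them to the spread of values returned by simulate().
sim = simulate()
print("median max distance over simulated walks:",
      sorted(m for m, _ in sim)[len(sim) // 2])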
3,766
How can I determine which of two sequences of coin flips is real and which is fake?
In addition to statistical approaches, one visual approach is to plot the sequences as a "drunkard's walk". Treat H as a step forward and T as a step back and plot the sequences. One way to do this in Python is: import altair as alt import pandas as pd seq1 = "HTHHHTHTTHHTTTTTTTTHHHTTTHHTTTTHHTTHHHTTHTHTTTTTTHTHTTTTHHHHTHTHTTHTTTHTTHTTTTHTHHTHHHHTTTTTHHHHTHHHTTTTHTHTTHHHHTHHHHHHHHTTHHTHHTHHHHHHHTTHTHTTTHHTTTTHTHHTTHTTHTHTHTTHHHHHTTHTTTHTHTHHTTTTHTTTTTHHTHTHHHHTTTTHTHHHTHHTHTHTHTHHHTHTTHHHTHHHHHHTHHHTHTTTHHHTTTHHTHTTHHTHHHTHTTHTTHTTTHHTHTHTTTTHTHTHTTHTHTHT" seq2 = "TTHHTHTTHTTTHTTTHTTTHTTHTHHTHHTHTHHTTTHHTHTHTTHTHHTTHTHHTHTTTHHTTHHTTHHHTHHTHTTHTHTTHHTHHHTTHTHTTTHHTTHTHTHTHTHTTHTHTHHHTTHTHTHHTHHHTHTHTTHTTHHTHTHTHTTHHTTHTHTTHHHTHTHTHTTHTTHHTTHTHHTHHHTTHHTHTTHTHTHTHTHTHTHHHTHTHTHTHHTHHTHTHTTHTTTHHTHTTTHTHHTHHHHTTTHHTHTHTHTHHHTTHHTHTTTHTHHTHTHTHHTHTTHTTHTHHTHTHTTT" def encode(stream): return [1 if c == "H" else -1 for c in stream] df = pd.DataFrame({"seq1": encode(seq1), "seq2": encode(seq2)}) df_cumsum = pd.melt( df.cumsum(), var_name="sequence", value_name="cumsum", ignore_index=False ).reset_index() chart = alt.Chart(df_cumsum).mark_line().encode(x="index", y="cumsum", color="sequence") chart.show() When comparing a truly random sequence to a human-generated one, it is often easy to tell them apart based on inspection of the walk.
3,767
How can I determine which of two sequences of coin flips is real and which is fake?
Looking at the HH, HT, TH, TT frequencies is probably the most straightforward way to approach the two series presented, given people's tendency to apply HT and TH more frequently when trying to appear random. More generally, however, that approach will fail to detect non-randomness even in sequences with obvious patterns. For instance, a repeating sequence of HHHTHTTT will produce balanced counts of pairs as well as triples (HHH, HHT, etc.). Here's an idea for a more general solution that was initially inspired by answers discussing random walks. Start with the observation that for a random sequence, the number of heads in any given subsequence is binomially distributed. We can count the number of heads, $n_{i,j}$, between flip number $i$ and flip number $j>i$ for all values of $i,j$. Then compare these to the expected number of counts if all the $n_{i,j}$ were independent. Although they are obviously not independent, the comparison gives rise to a useful statistic: the maximum absolute difference between the observed counts and the expected counts. Applying this approach to sequence 1 gives us 98.26 as our test statistic: there are 257 subsequences of length 44. If they were all composed of independent Bernoulli trials, the expected number of the 257 that contained exactly 22 heads is ~30.74, whereas sequence 1 contains 129 subsequences with exactly 22 heads (very underdispersed). 129 - 30.74 = 98.26, which is the maximum of these differences for sequence 1. Performing the same calculations on sequence 2 gives a test statistic of 48.30: there are 197 subsequences of length 104. The expected number containing exactly 54 heads would be ~13.70. Sequence 2 contains 62, so the test statistic is 62 - 13.70 = 48.30. The test statistics can be compared to those from a large number of random sequences of the same size. In this case, no samples are greater than the test statistic from sequence 1, and about 14% of samples are greater than the statistic from sequence 2. 
Here it is all together with R code: library(parallel) set.seed(16) seq1 <- utf8ToInt("TTHHTHTTHTTTHTTTHTTTHTTHTHHTHHTHTHHTTTHHTHTHTTHTHHTTHTHHTHTTTHHTTHHTTHHHTHHTHTTHTHTTHHTHHHTTHTHTTTHHTTHTHTHTHTHTTHTHTHHHTTHTHTHHTHHHTHTHTTHTTHHTHTHTHTTHHTTHTHTTHHHTHTHTHTTHTTHHTTHTHHTHHHTTHHTHTTHTHTHTHTHTHTHHHTHTHTHTHHTHHTHTHTTHTTTHHTHTTTHTHHTHHHHTTTHHTHTHTHTHHHTTHHTHTTTHTHHTHTHTHHTHTTHTTHTHHTHTHTTT") == utf8ToInt("H") seq2 <- utf8ToInt("HTHHHTHTTHHTTTTTTTTHHHTTTHHTTTTHHTTHHHTTHTHTTTTTTHTHTTTTHHHHTHTHTTHTTTHTTHTTTTHTHHTHHHHTTTTTHHHHTHHHTTTTHTHTTHHHHTHHHHHHHHTTHHTHHTHHHHHHHTTHTHTTTHHTTTTHTHHTTHTTHTHTHTTHHHHHTTHTTTHTHTHHTTTTHTTTTTHHTHTHHHHTTTTHTHHHTHHTHTHTHTHHHTHTTHHHTHHHHHHTHHHTHTTTHHHTTTHHTHTTHHTHHHTHTTHTTHTTTHHTHTHTTTTHTHTHTTHTHTHT") == utf8ToInt("H") # function to calculate test statistic S fS <- function(m, p = 0.5) { if (!is.matrix(m)) m <- matrix(m, ncol = 1) n <- nrow(m) steps <- rep.int(1:(n - 1L), 2:n) nEx <- dbinom(sequence(2:n, from = 0L), steps, p)*(n - steps) idx <- sequence((n - 1L):1) idx <- idx*(idx + 1L)/2L S <- numeric(ncol(m)) for (i in 1:ncol(m)) S[i] <- max(abs(nEx - tabulate(idx + dist(cumsum(m[,i])), length(nEx)))) S } (S1 <- fS(seq1)) #> [1] 98.26173 (S2 <- fS(seq2)) #> [1] 48.30014 # calculate S from 1e6 random sequences of length n (probably overkill) n <- length(seq1) cl <- makeCluster(detectCores() - 1L) clusterExport(cl, list("fS", "n")) system.time(simS <- unlist(parLapply(cl, 1:100, function(i) fS(matrix(sample(0:1, 1e4L*n, TRUE), n))))) #> user system elapsed #> 0.00 0.03 339.42 stopCluster(cl) nsim <- 1e6L # calculate approximate p-values sum(simS > S1)/nsim #> [1] 0 sum(simS > S2)/nsim #> [1] 0.139855 max(simS) #> [1] 91.01078
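As a more transparent (but much slower) illustration of the same idea, here is a Python sketch of my own; it is not the answer's R code, and it counts the n - length + 1 windows of each length as in the prose description above, so its exact values may differ slightly from the R output.

from math import comb
from collections import Counter

def max_binomial_gap(seq, p=0.5):
    # For every window length and head count, compare the observed number of
    # windows with that head count to the count expected if all windows were
    # independent Binomial(length, p) draws, and return the largest gap.
    flips = [1 if c == "H" else 0 for c in seq]
    n = len(flips)
    prefix = [0]
    for f in flips:
        prefix.append(prefix[-1] + f)  # heads in flips[i:j] = prefix[j] - prefix[i]
    worst = 0.0
    for length in range(1, n):
        windows = n - length + 1
        counts = Counter(prefix[i + length] - prefix[i] for i in range(windows))
        for k in range(length + 1):
            expected = comb(length, k) * p**k * (1 - p)**(length - k) * windows
            worst = max(worst, abs(counts.get(k, 0) - expected))
    return worst

# Usage: pass the two 300-flip sequences; the resulting statistics can then be
# compared against simulated random sequences, as in the R simulation above.
print(max_binomial_gap("HTHHTTHTTHHTHTHHTTTH"))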
3,768
How can I determine which of two sequences of coin flips is real and which is fake?
This problem specifies that we have the following information: We observed two coin flip sequences: sequence $S_1$ and sequence $S_2$. Each of these sequences could have been generated by either mechanism $R$, corresponding to independent flips of a fair coin, or some other mechanism $\bar{R}$. Exactly one of these two sequences was generated by the mechanism $R$. Since both sequences $S_1$ and $S_2$ could possibly have been generated by $R$ or $\bar{R}$, we cannot be sure which of these two sequences was generated by $R$. However, probability theory provides us with the tools to quantify our belief $P(R_i|S_i)$ that sequence $S_i$ was generated by mechanism $R$. Given the information $I$ that exactly one of these two sequences was generated by mechanism $R$, the probability that sequence 1 was generated by mechanism $\bar{R}$ and sequence 2 was generated by mechanism $R$ is: $$ P(\bar{R_1}R_2|S_1 S_2 I) = \frac{P(R_2|S_2)}{P(R_1|S_1) + P(R_2|S_2)} $$ This problem explicitly specifies what it means for a sequence to have been generated by mechanism $R$: the sequence $S_i$ is described by independent flips $y_{ij}$ of a fair coin, such that the likelihood is: $$ P(S_i|R_i) = \prod_j \mathrm{Bernoulli}(y_{ij} | 0.5) $$ However, in order to determine the probability that a given sequence was generated by mechanism $R$, $P(R_i|S_i)$, we also need to specify alternative mechanisms $\bar{R}$ by which the sequences may have been generated. At this point we have to use our own creativity and external information to develop models that could reasonably describe how these sequences might have been generated, if they were not generated by mechanism $R$. The other answers to this question do a great job of describing various mechanisms $\bar{R}$ by which sequences could be generated: sequences generated by sampling pairs of coin flips from a distribution where the probability of different pairs deviates from uniform; sequences generated so that they do not have long runs of only heads or tails; sequences generated so that a given compression algorithm produces a small compressed size; and sequences generated by a random walk in which a given coin flip outcome is affected by recent coin flip outcomes. Note: we can always develop a hypothesis that appears to result in our observed sequence having been "inevitable", according to our likelihood, and if our prior probability for that mechanism of sequence generation is high enough, then using this hypothesis can always result in a low probability that the sequence had been generated by mechanism $R$. This practice is also referred to as HARKing, and Jaynes refers to this as a "sure-thing hypothesis". In general, we may have multiple hypotheses that can describe how a sequence may be generated, but the more contrived hypotheses such as "sure-thing hypotheses" should have sufficiently low prior probabilities that we would not infer that those hypotheses have the highest probability of having generated a given sequence. Here's an example using this procedure with one possible model for $\bar{R}$ sequence generation to quantify the probability that sequence 2 was the sequence that was generated by mechanism $R$. This Stan model defines model_r_lpmf as the log likelihood for sequence generation mechanism $R$, and model_not_r_lpmf as the log likelihood for sequence generation mechanism $\bar{R}$. 
In this case, we are testing a hypothesis $\bar{R}$ in which coin flips can be correlated, and there is a fairly well-defined correlation length. Then, we are modeling the probability that the sequence was generated by model $R$ as p_model_r, such that our prior probability considers mechanisms $R$ and $\bar{R}$ sequence generation equally probable. // model for hypothesis testing whether a sequence of coin flips is generated by either: // - model R: independent flips of a fair coin, or // - model not R: unfair coin with correlations functions { // independent flips of a fair coin real model_r_lpmf(int[] sequence, int N) { return bernoulli_lpmf(sequence | 0.5); } // unfair coin with correlations real model_not_r_lpmf(int[] sequence, int N, int n, vector beta) { real tmp = 0; real offset = 0; for (i in 1:N) { offset = beta[1]; for (j in 1:min(n - 1, i - 1)) { offset += beta[j + 1] * (2 * sequence[i - j] - 1); } tmp += bernoulli_logit_lpmf(sequence[i] | offset); } return tmp; } } data { int N; // the length of the observed sequence int<lower=0, upper=1> sequence[N]; // the observed sequence } transformed data { int n = N / 2 + 1; // number of correlation parameters to model } parameters { real<lower=0, upper=1> p_model_r; // probability that the sequence was generated by model R // not R model parameters real log_scale; // log of the scale of the correlation coefficients `beta` real log_decay_rate; // log of the decay rate for the correlation coefficients `beta` vector[n] alpha; // pretransformed correlation coefficients } transformed parameters { // transformed not R model parameters real scale = exp(log_scale); // typical scale of the correlation coefficients real decay_rate = exp(log_decay_rate); // typical decay rate of correlation coefficients vector[n] beta; // correlation coefficients for (j in 1:n) { beta[j] = alpha[j] * scale * exp(-(j - 1) * decay_rate); } } model { // uninformative Haldane prior for the probability that the sequence was generated by model R target += - log(p_model_r) - log1m(p_model_r); // priors for not R model parameters log_scale ~ normal(-1, 1); log_decay_rate ~ normal(-1, log(n + 1) / 2.0); alpha ~ normal(0, 1); // the sequence was generated by model R with probability `p_model_r`, otherwise it was generated by model not R target += log_mix( p_model_r, model_r_lpmf(sequence | N), model_not_r_lpmf(sequence | N, n, beta) ); } Using pystan with this model, we can evaluate the probability p_model_r that each of the two provided sequences was generated by mechanism $R$. 
import pystan # MODEL_CODE is a string containing the Stan program above model = pystan.StanModel(model_code=MODEL_CODE) sequence_1 = "TTHHTHTTHTTTHTTTHTTTHTTHTHHTHHTHTHHTTTHHTHTHTTHTHHTTHTHHTHTTTHHTTHHTTHHHTHHTHTTHTHTTHHTHHHTTHTHTTTHHTTHTHTHTHTHTTHTHTHHHTTHTHTHHTHHHTHTHTTHTTHHTHTHTHTTHHTTHTHTTHHHTHTHTHTTHTTHHTTHTHHTHHHTTHHTHTTHTHTHTHTHTHTHHHTHTHTHTHHTHHTHTHTTHTTTHHTHTTTHTHHTHHHHTTTHHTHTHTHTHHHTTHHTHTTTHTHHTHTHTHHTHTTHTTHTHHTHTHTTT" samples_1 = model.sampling( data=dict(N=len(sequence_1), sequence=[int(c == "H") for c in sequence_1]), chains=16, ) p_model_r_1 = samples_1["p_model_r"].mean() # p_model_r_1 = 0.027 sequence_2 = "HTHHHTHTTHHTTTTTTTTHHHTTTHHTTTTHHTTHHHTTHTHTTTTTTHTHTTTTHHHHTHTHTTHTTTHTTHTTTTHTHHTHHHHTTTTTHHHHTHHHTTTTHTHTTHHHHTHHHHHHHHTTHHTHHTHHHHHHHTTHTHTTTHHTTTTHTHHTTHTTHTHTHTTHHHHHTTHTTTHTHTHHTTTTHTTTTTHHTHTHHHHTTTTHTHHHTHHTHTHTHTHHHTHTTHHHTHHHHHHTHHHTHTTTHHHTTTHHTHTTHHTHHHTHTTHTTHTTTHHTHTHTTTTHTHTHTTHTHTHT" samples_2 = model.sampling( data=dict(N=len(sequence_2), sequence=[int(c == "H") for c in sequence_2]), chains=16, ) p_model_r_2 = samples_2["p_model_r"].mean() # p_model_r_2 = 0.790 Here we have quantified the probability that each sequence was generated according to the sequence generating model $R$ as the following: $$ \begin{aligned} P(R_1|S_1) &= 0.027 \\ P(R_2|S_2) &= 0.790 \\ P(\bar{R_1}R_2|S_1 S_2 I) &= 0.967 \\ \end{aligned} $$ So, we are fairly certain that sequence 2 was generated by independent coin flips and sequence 1 was not, relative to the model of sequence generation $\bar{R}$ that we defined, and we quantify this belief with a probability of 0.967. This is a high probability, but of course we could still be wrong -- both sequences could have been generated by either model. Additionally, we could have selected a model of sequence generation for $\bar{R}$ that cannot describe the procedure that was actually used to generate the sequences. This process for quantifying our belief that one of these two sequences was generated by a fair coin with independent flips is the piece of reasoning that appears to be missing from the other answers currently available.
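As a small sanity check of the combination formula from the beginning of this answer (my own addition, using the posterior means reported above), the two probabilities can be combined directly:

# Combine the reported posterior probabilities using
# P(not-R_1, R_2 | S_1 S_2 I) = P(R_2|S_2) / (P(R_1|S_1) + P(R_2|S_2)).
p_r_1 = 0.027   # P(R_1 | S_1), posterior mean of p_model_r for sequence 1
p_r_2 = 0.790   # P(R_2 | S_2), posterior mean of p_model_r for sequence 2

p_seq2_is_the_real_one = p_r_2 / (p_r_1 + p_r_2)
print(round(p_seq2_is_the_real_one, 3))   # 0.967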
How can I determine which of two sequences of coin flips is real and which is fake? This problem specifies that we have the following information: We observed two coin flip sequences: sequence $S_1$ and sequence $S_2$. Each of these sequences could have been generated by either mech
3,769
Why is the sum of two random variables a convolution?
Convolution calculations associated with distributions of random variables are all mathematical manifestations of the Law of Total Probability. In the language of my post at What is meant by a “random variable”?, A pair of random variables $(X,Y)$ consists of a box of tickets on each of which are written two numbers, one designated $X$ and the other $Y$. The sum of these random variables is obtained by adding the two numbers found on each ticket. I posted a picture of such a box and its tickets at Clarifying the concept of sum of random variables. This computation literally is a task you could assign to a third-grade classroom. (I make this point to emphasize both the fundamental simplicity of the operation as well as showing how strongly it is connected with what everybody understands a "sum" to mean.) How the sum of random variables is expressed mathematically depends on how you represent the contents of the box: In terms of probability mass functions (pmf) or probability density functions (pdf), it is the operation of convolution. In terms of moment generating functions (mgf), it is the (elementwise) product. In terms of (cumulative) distribution functions (cdf), it is an operation closely related to the convolution. (See the references.) In terms of characteristic functions (cf) it is the (elementwise) product. In terms of cumulant generating functions (cgf) it is the sum. The first two of these are special insofar as the box might not have a pmf, pdf, or mgf, but it always has a cdf, cf, and cgf. To see why convolution is the appropriate method to compute the pmf or pdf of a sum of random variables, consider the case where all three variables $X,$ $Y,$ and $X+Y$ have a pmf: by definition, the pmf for $X+Y$ at any number $z$ gives the proportion of tickets in the box where the sum $X+Y$ equals $z,$ written $\Pr(X+Y=z).$ The pmf of the sum is found by breaking down the set of tickets according to the value of $X$ written on them, following the Law of Total Probability, which asserts proportions (of disjoint subsets) add. More technically, The proportion of tickets found within a collection of disjoint subsets of the box is the sum of the proportions of the individual subsets. It is applied thus: The proportion of tickets where $X+Y=z$, written $\Pr(X+Y=z),$ must equal the sum over all possible values $x$ of the proportion of tickets where $X=x$ and $X+Y=z,$ written $\Pr(X=x, X+Y=z).$ Because $X=x$ and $X+Y=z$ imply $Y=z-x,$ this expression can be rewritten directly in terms of the original variables $X$ and $Y$ as $$\Pr(X+Y=z) = \sum_x \Pr(X=x, Y=z-x).$$ That's the convolution. Edit Please note that although convolutions are associated with sums of random variables, the convolutions are not convolutions of the random variables themselves! Indeed, in most cases it is not possible to convolve two random variables. For this to work, their domains have to have additional mathematical structure. This structure is a continuous topological group. Without getting into details, suffice it to say that convolution of any two functions $X, Y:G \to H$ must abstractly look something like $$(X\star Y)(g) = \sum_{h,k\in G\mid h+k=g} X(h)Y(k).$$ (The sum could be an integral and, if this is going to produce new random variables from existing ones, $X\star Y$ must be measurable whenever $X$ and $Y$ are; that's where some consideration of topology or measurability must come in.) This formula invokes two operations. 
One is the multiplication on $H:$ it must make sense to multiply values $X(h)\in H$ and $Y(k)\in H.$ The other is the addition on $G:$ it must make sense to add elements of $G.$ In most probability applications, $H$ is a set of numbers (real or complex) and multiplication is the usual one. But $G,$ the sample space, often has no mathematical structure at all. That's why the convolution of random variables is usually not even defined. The objects involved in convolutions in this thread are mathematical representations of the distributions of random variables. They are used to compute the distribution of a sum of random variables, given the joint distribution of those random variables. References Stuart and Ord, Kendall's Advanced Theory of Statistics, Volume 1. Fifth Edition, 1987, Chapters 1, 3, and 4 (Frequency Distributions, Moments and Cumulants, and Characteristic Functions).
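To make the pmf formula above concrete, here is a minimal Python sketch (an added illustration, not part of the original answer or its references) that applies $\Pr(X+Y=z) = \sum_x \Pr(X=x)\Pr(Y=z-x)$ to two fair dice; the variable names and the use of fractions.Fraction are just choices made for the example.

```python
from fractions import Fraction

# pmf of one fair die: Pr(X = x) = 1/6 for x in {1, ..., 6}
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

# Law of Total Probability: Pr(X + Y = z) = sum over x of Pr(X = x) * Pr(Y = z - x)
pmf_sum = {}
for z in range(2, 13):
    pmf_sum[z] = sum(pmf[x] * pmf.get(z - x, Fraction(0)) for x in pmf)

print(pmf_sum[7])   # Fraction(1, 6)
print(pmf_sum[12])  # Fraction(1, 36)
```

Replacing the sum by an integral gives the corresponding density calculation for continuous variables.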
3,770
Why is the sum of two random variables a convolution?
Notation, upper and lower case
https://en.wikipedia.org/wiki/Notation_in_probability_and_statistics
Random variables are usually written in upper case roman letters: $X$, $Y$, etc. Particular realizations of a random variable are written in corresponding lower case letters. For example $x_1$, $x_2$, …, $x_n$ could be a sample corresponding to the random variable $X$, and a cumulative probability is formally written $P ( X > x )$ to differentiate the random variable from its realization. $Z=X+Y$ means $z_i=x_i+y_i \qquad \forall x_i,y_i$

Mixture of variables $ \rightarrow $ sum of pdf's
https://en.wikipedia.org/wiki/Mixture_distribution
You use a sum of the probability density functions $f_{X_1}$ and $f_{X_2}$ when the probability (of say Z) is defined by a single sum of different probabilities. For example when $Z$ is a fraction $s$ of the time defined by $X_1$ and a fraction $1-s$ of the time defined by $X_2$, then you get $$\mathbb{P}(Z=z) = s \mathbb{P}(X_1=z) + (1-s) \mathbb{P}(X_2=z)$$ and $$f_Z(z) = s f_{X_1}(z) + (1-s) f_{X_2}(z)$$ . . . . an example is a choice between dice rolls with either a 6-sided die or a 12-sided die. Say you use the one die or the other 50 percent of the time each. Then $$f_{mixed roll}(z) = 0.5 \, f_{6-sided}(z) + 0.5 \, f_{12-sided}(z)$$

Sum of variables $ \rightarrow $ convolution of pdf's
https://en.wikipedia.org/wiki/Convolution_of_probability_distributions
You use a convolution of the probability density functions $f_{X_1}$ and $f_{X_2}$ when the probability (of say Z) is defined by multiple sums of different (independent) probabilities. For example when $Z = X_1 + X_2$ (i.e. a sum!) and multiple different pairs $x_1,x_2$ sum up to $z$, each with probability $f_{X_1}(x_1)f_{X_2}(x_2)$. Then you get the convolution $$\mathbb{P}(Z=z) = \sum_{\text{all pairs }x_1+x_2=z} \mathbb{P}(X_1=x_1) \cdot \mathbb{P}(X_2=x_2)$$ and $$f_Z(z) = \sum_{x_1 \in \text{ domain of }X_1} f_{X_1}(x_1) f_{X_2}(z-x_1)$$ or for continuous variables $$f_Z(z) = \int_{x_1 \in \text{ domain of }X_1} f_{X_1}(x_1) f_{X_2}(z-x_1) d x_1$$ . . . . an example is a sum of two dice rolls $f_{X_2}(x) = f_{X_1}(x) = 1/6$ for $x \in \lbrace 1,2,3,4,5,6 \rbrace$ and $$f_Z(z) = \sum_{x \in \lbrace 1,2,3,4,5,6 \rbrace \\ \text{ and } z-x \in \lbrace 1,2,3,4,5,6 \rbrace} f_{X_1}(x) f_{X_2}(z-x)$$ Note that I choose to integrate and sum over $x_1 \in \text{ domain of } X_1$, which I find more intuitive, but it is not necessary and you can integrate from $-\infty$ to $\infty$ if you define $f_{X_1}(x_1)=0$ outside the domain.

Image example
Let $Z$ be $X+Y$. To know $\mathbb{P}(z-\frac{1}{2}dz<Z<z+\frac{1}{2}dz)$ you will have to integrate over the probabilities for all the realizations of $x,y$ that lead to $z-\frac{1}{2}dz<Z=X+Y<z+\frac{1}{2}dz$. So that is the integral of $f(x)g(y)$ in the region $\pm \frac{1}{2}dz$ along the line $x+y=z$.
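As a small illustration of the contrast above (a mixture adds the pmfs pointwise, a sum of variables convolves them), here is a brief numpy sketch of the 6-sided/12-sided dice example; the array names are mine and numpy.convolve carries out the discrete convolution.

```python
import numpy as np

d6  = np.full(6, 1 / 6)     # pmf of a fair 6-sided die on {1, ..., 6}
d12 = np.full(12, 1 / 12)   # pmf of a fair 12-sided die on {1, ..., 12}

# Mixture (pick one die at random, roll it once): pointwise sum of the pmfs
mixture = 0.5 * np.pad(d6, (0, 6)) + 0.5 * d12   # support {1, ..., 12}

# Sum of one roll of each die: convolution of the pmfs
pmf_sum = np.convolve(d6, d12)                   # support {2, ..., 18}

print(mixture.sum())   # 1.0 -- a valid pmf
print(pmf_sum.sum())   # 1.0 -- also a valid pmf, but on a different support
```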
3,771
Why is the sum of two random variables a convolution?
Your confusion seems to arise from conflating random variables with their distributions. To "unlearn" this confusion, it might help to take a couple of steps back, empty your mind for a moment, forget about any fancy formalisms like probability spaces and sigma-algebras (if it helps, pretend you're back in elementary school and have never heard of any of those things!) and just think about what a random variable fundamentally represents: a number whose value we're not sure about. For example, let's say I have a six-sided die in my hand. (I really do. In fact, I have a whole bag of them.) I haven't rolled it yet, but I'm about to, and I decide to call the number that I haven't rolled yet on that die by the name "$X$". What can I say about this $X$, without actually rolling the die and determining its value? Well, I can tell that its value won't be $7$, or $-1$, or $\frac12$. In fact, I can tell for sure that it's going to be a whole number between $1$ and $6$, inclusive, because those are the only numbers marked on the die. And because I bought this bag of dice from a reputable manufacturer, I can be pretty sure that when I do roll the die and determine what number $X$ actually is, it's equally likely to be any of those six possible values, or as close to that as I can determine. In other words, my $X$ is an integer-valued random variable uniformly distributed over the set $\{1,2,3,4,5,6\}$. OK, but surely all that is obvious, so why do I keep belaboring such trivial things that you surely know already? It's because I want to make another point, which is also trivial yet, at the same time, crucially important: I can do math with this $X$, even if I don't know its value yet! For example, I can decide to add one to the number $X$ that I'll roll on the die, and call that number by the name "$Q$". I won't know what number this $Q$ will be, since I don't know what $X$ will be until I've rolled the die, but I can still say that $Q$ will be one greater than $X$, or in mathematical terms, $Q = X+1$. And this $Q$ will also be a random variable, because I don't know its value yet; I just know it will be one greater than $X$. And because I know what values $X$ can take, and how likely it is to take each of those values, I can also determine those things for $Q$. And so can you, easily enough. You won't really need any fancy formalisms or computations to figure out that $Q$ will be a whole number between $2$ and $7$, and that it's equally likely (assuming that my die is as fair and well balanced as I think it is) to take any of those values. But there's more! I could just as well decide to, say, multiply the number $X$ that I'll roll on the die by three, and call the result $R = 3X$. And that's another random variable, and I'm sure you can figure out its distribution, too, without having to resort to any integrals or convolutions or abstract algebra. And if I really wanted, I could even decide to take the still-to-be-determined number $X$ and to fold, spindle and mutilate it (or rather: divide it by two, subtract one from it and square the result). And the resulting number $S = (\frac12 X - 1)^2$ is yet another random variable; this time, it will be neither integer-valued nor uniformly distributed, but you can still figure out its distribution easily enough using just elementary logic and arithmetic. OK, so I can define new random variables by plugging my unknown die roll $X$ into various equations. So what? Well, remember when I said that I had a whole bag of dice?
Let me grab another one, and call the number that I'm going to roll on that die by the name "$Y$". Those two dice I grabbed from the bag are pretty much identical — if you swapped them when I wasn't looking, I wouldn't be able to tell — so I can pretty safely assume that this $Y$ will also have the same distribution as $X$. But what I really want to do is roll both dice and count the total number of pips on each of them. And that total number of pips, which is also a random variable since I don't know it yet, I will call "$T$". How big will this number $T$ be? Well, if $X$ is the number of pips I will roll on the first die, and $Y$ is the number of pips I will roll on the second die, then $T$ will clearly be their sum, i.e. $T = X+Y$. And I can tell that, since $X$ and $Y$ are both between one and six, $T$ must be at least two and at most twelve. And since $X$ and $Y$ are both whole numbers, $T$ clearly must be a whole number as well. But how likely is $T$ to take each of its possible values between two and twelve? It's definitely not equally likely to take each of them — a bit of experimentation will reveal that it's a lot harder to roll a twelve on a pair of dice than it is to roll, say, a seven. To figure that out, let me denote the probability that I'll roll the number $a$ on the first die (the one whose result I decided to call $X$) by the expression $\Pr[X = a]$. Similarly, I'll denote the probability that I'll roll the number $b$ on the second die by $\Pr[Y = b]$. Of course, if my dice are perfectly fair and balanced, then $\Pr[X = a] = \Pr[Y = b] = \frac16$ for any $a$ and $b$ between one and six, but we might as well consider the more general case where the dice could actually be biased, and more likely to roll some numbers than others. Now, since the two die rolls will be independent (I'm certainly not planning on cheating and adjusting one of them based on the other!), the probability that I'll roll $a$ on the first die and $b$ on the second will simply be the product of those probabilities: $$\Pr[X = a \text{ and } Y = b] = \Pr[X = a] \Pr[Y = b].$$ (Note that the formula above only holds for independent pairs of random variables; it certainly wouldn't hold if we replaced $Y$ above with, say, $Q$!) Now, there are several possible values of $X$ and $Y$ that could yield the same total $T$; for example, $T = 4$ could arise just as well from $X = 1$ and $Y = 3$ as from $X = 2$ and $Y = 2$, or even from $X = 3$ and $Y = 1$. But if I had already rolled the first die, and knew the value of $X$, then I could say exactly what value I'd have to roll on the second die to reach any given total number of pips. Specifically, let's say we're interested in the probability that $T = c$, for some number $c$. Now, if I know after rolling the first die that $X = a$, then I could only get the total $T = c$ by rolling $Y = c - a$ on the second die. And of course, we already know, without rolling any dice at all, that the a priori probability of rolling $a$ on the first die and $c - a$ on the second die is $$\Pr[X = a \text{ and } Y = c-a] = \Pr[X = a] \Pr[Y = c-a].$$ But of course, there are several possible ways for me to reach the same total $c$, depending on what I end up rolling on the first die. To get the total probability $\Pr[T = c]$ of rolling $c$ pips on the two dice, I need to add up the probabilities of all the different ways I could roll that total. 
For example, the total probability that I'll roll a total of 4 pips on the two dice will be: $$\Pr[T = 4] = \Pr[X = 1]\Pr[Y = 3] + \Pr[X = 2]\Pr[Y = 2] + \Pr[X = 3]\Pr[Y = 1] + \Pr[X = 4]\Pr[Y = 0] + \dots$$ Note that I went a bit too far with that sum above: certainly $Y$ cannot possibly be $0$! But mathematically that's no problem; we just need to define the probability of impossible events like $Y = 0$ (or $Y = 7$ or $Y = -1$ or $Y = \frac12$) as zero. And that way, we get a generic formula for the distribution of the sum of two die rolls (or, more generally, any two independent integer-valued random variables): $$T = X + Y \implies \Pr[T = c] = \sum_{a \in \mathbb Z} \Pr[X = a]\Pr[Y = c - a].$$ And I could perfectly well stop my exposition here, without ever mentioning the word "convolution"! But of course, if you happen to know what a discrete convolution looks like, you may recognize one in the formula above. And that's one fairly advanced way of stating the elementary result derived above: the probability mass function of the sum of two integer-valued random variables is the discrete convolution of the probability mass functions of the summands. And of course, by replacing the sum with an integral and probability mass with probability density, we get an analogous result for continuously distributed random variables, too. And by sufficiently stretching the definition of a convolution, we can even make it apply to all random variables, regardless of their distribution — although at that point the formula becomes almost a tautology, since we'll have pretty much just defined the convolution of two arbitrary probability distributions to be the distribution of the sum of two independent random variables with those distributions. But even so, all this stuff with convolutions and distributions and PMFs and PDFs is really just a set of tools for calculating things about random variables. The fundamental objects that we're calculating things about are the random variables themselves, which really are just numbers whose values we're not sure about. And besides, that convolution trick only works for sums of random variables, anyway. If you wanted to know, say, the distribution of $U = XY$ or $V = X^Y$, you'd have to figure it out using elementary methods, and the result would not be a convolution. Addendum: If you'd like a generic formula for computing the distribution of the sum / product / exponential / whatever combination of two random variables, here's one way to write one: $$A = B \odot C \implies \Pr[A = a] = \sum_{b,c} \Pr[B = b \text{ and } C = c] [a = b \odot c],$$ where $\odot$ stands for an arbitrary binary operation and $[a = b \odot c]$ is an Iverson bracket, i.e. $$[a = b \odot c] = \begin{cases}1 & \text{if } a = b \odot c, \text{ and} \\ 0 & \text{otherwise}. \end{cases}$$ (Generalizing this formula for non-discrete random variables is left as an exercise in mostly pointless formalism. The discrete case is quite sufficient to illustrate the essential idea, with the non-discrete case just adding a bunch of irrelevant complications.) You can check yourself that this formula indeed works e.g. for addition and that, for the special case of adding two independent random variables, it is equivalent to the "convolution" formula given earlier. Of course, in practice, this general formula is much less useful for computation, since it involves a sum over two unbounded variables instead of just one.
But unlike the single-sum formula, it works for arbitrary functions of two random variables, even non-invertible ones, and it also explicitly shows the operation $\odot$ instead of disguising it as its inverse (like the "convolution" formula disguises addition as subtraction). Ps. I just rolled the dice. It turns out that $X = 5$ and $Y = 6$, which implies that $Q = 6$, $R = 15$, $S = 2.25$, $T = 11$, $U = 30$ and $V = 15625$. Now you know. ;-)
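The addendum's generic formula is straightforward to brute-force for discrete variables. The sketch below is only an added illustration of that formula (the helper name pmf_of_combination is hypothetical): it sums $\Pr[B=b]\Pr[C=c]$ over all pairs with $a = b \odot c$, reproducing the convolution result for $T = X+Y$ and also handling the non-convolution case $U = XY$.

```python
from collections import defaultdict
from fractions import Fraction

def pmf_of_combination(pmf_b, pmf_c, op):
    """Pr[A = a] = sum over (b, c) of Pr[B = b] * Pr[C = c] * [a = op(b, c)],
    for independent discrete variables given as dicts mapping value -> probability."""
    pmf_a = defaultdict(Fraction)
    for b, pb in pmf_b.items():
        for c, pc in pmf_c.items():
            pmf_a[op(b, c)] += pb * pc
    return dict(pmf_a)

die = {k: Fraction(1, 6) for k in range(1, 7)}   # one fair die; X and Y share this pmf

pmf_T = pmf_of_combination(die, die, lambda x, y: x + y)   # T = X + Y
pmf_U = pmf_of_combination(die, die, lambda x, y: x * y)   # U = X * Y

print(pmf_T[4])    # Fraction(1, 12): the pairs (1,3), (2,2), (3,1)
print(pmf_U[12])   # Fraction(1, 9):  the pairs (2,6), (3,4), (4,3), (6,2)
```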
3,772
Why is the sum of two random variables a convolution?
Actually I don't think this is quite right, unless I'm misunderstanding you. If $X$ and $Y$ are independent random variables, then the sum/convolution relationship you're referring to is as follows: $$ p(X+Y) = p(X)*p(Y) $$ That is, the probability density function (pdf) of the sum is equal to the convolution (denoted by the $*$ operator) of the individual pdf's of $X$ and $Y$. To see why this is, consider that for a fixed value of $X=x$, the sum $S=X+Y$ follows the pdf of $Y$, shifted by an amount $x$. So if you consider all possible values of $X$, the distribution of $S$ is given by replacing each point in $p(X)$ by a copy of $p(Y)$ centered on that point (or vice versa), and then summing over all these copies, which is exactly what a convolution is. Formally, we can write this as: $$ p(S) = \int p_Y(S-x)p_X(x)dx $$ or, equivalently: $$ p(S) = \int p_X(S-y)p_Y(y)dy $$ Edit: To hopefully clear up some confusion, let me summarize some of the things I said in comments. The sum of two random variables $X$ and $Y$ does not refer to the sum of their distributions. It refers to the result of summing their realizations. To repeat the example I gave in the comments, suppose $X$ and $Y$ are the numbers thrown with a roll of two dice ($X$ being the number thrown with one die, and $Y$ the number thrown with the other). Then let's define $S=X+Y$ as the total number thrown with the two dice together. For example, for a given dice roll, we might throw a 3 and a 5, and so the sum would be 8. The question now is: what does the distribution of this sum look like, and how does it relate to the individual distributions of $X$ and $Y$? In this specific example, the number thrown with each die follows a (discrete) uniform distribution between [1, 6]. The sum follows a triangular distribution between [2, 12], with a peak at 7. As it turns out, this triangular distribution can be obtained by convolving the uniform distributions of $X$ and $Y$, and this property actually holds for all sums of (independent) random variables.
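To see the "shifted copies of $p(Y)$, then summed" picture numerically, here is a rough sketch (an added illustration using continuous uniforms rather than the dice of the answer) that discretizes two Uniform(0, 1) densities on a grid and approximates the convolution integral with numpy.convolve; the density of the sum comes out triangular on (0, 2) with its peak at 1.

```python
import numpy as np

dx = 0.001
x  = np.arange(0.0, 1.0, dx)
f  = np.ones_like(x)              # discretized pdf of X ~ Uniform(0, 1)
g  = np.ones_like(x)              # discretized pdf of Y ~ Uniform(0, 1), independent of X

# p(S) = integral of f(x) * g(s - x) dx, approximated by a discrete convolution
p_s = np.convolve(f, g) * dx      # density of S = X + Y on a grid over (0, 2)
s   = np.arange(len(p_s)) * dx

print(p_s.sum() * dx)                     # ~1.0: the result integrates to one
print(p_s[np.argmin(np.abs(s - 1.0))])    # ~1.0: the triangular density peaks at s = 1
```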
3,773
Why is the sum of two random variables a convolution?
Start by considering the set of all possible distinct outcomes of a process or experiment. Let $X$ be a rule (as yet unspecified) for assigning a number to any given outcome $\omega$; let $Y$ be too. Then $S=X+Y$ states a new rule $S$ for assigning a number to any given outcome: add the number you get from following rule $X$ to the number you get from following rule $Y$. We can stop there. Why shouldn't $S=X+Y$ be called a sum? If we go on to define a probability space, the mass (or density) function of the random variable (for that's what our rules are now) $S=X + Y$ can be got by convolving the mass (or density) function of $X$ with that of $Y$ (when they're independent). Here "convolving" has its usual mathematical sense. But people often talk of convolving distributions, which is harmless; or sometimes even of convolving random variables, which apparently isn't—if it suggests reading "$X + Y$" as "$X \ \mathrm{convoluted\ with} \ Y$", & therefore that the "$+$" in the former represents a complex operation somehow analogous to, or extending the idea of, addition rather than addition plain & simple. I hope it's clear from the exposition above, stopping where I said we could, that $X+Y$ already makes perfect sense before probability is even brought into the picture. In mathematical terms, random variables are functions whose co-domain is the set of real numbers & whose domain is the set of all outcomes. So the "$+$" in "$X + Y$" (or "$X(\omega) + Y(\omega)$", to show their arguments explicitly) bears exactly the same meaning as the "$+$" in "$\sin(\theta)+\cos(\theta)$". It's fine to think about how you'd sum vectors of realized values, if it aids intuition; but that oughtn't to engender confusion about the notation used for sums of random variables themselves. [This answer merely tries to draw together succinctly points made by @MartijnWeterings, @IlmariKaronen, @RubenvanBergen, & @whuber in their answers & comments. I thought it might help to come from the direction of explaining what a random variable is rather than what a convolution is. Thank you all!]
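Since the point is that random variables are rules on the outcome space, and that $X+Y$ is ordinary pointwise addition before any probability enters, here is a small Python sketch of that reading (the two-dice sample space and the names are assumptions made for the example):

```python
from fractions import Fraction

# The sample space: all outcomes of rolling two distinguishable dice
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]

# Random variables are just rules assigning a number to each outcome
X = lambda omega: omega[0]               # the number on the first die
Y = lambda omega: omega[1]               # the number on the second die
S = lambda omega: X(omega) + Y(omega)    # plain addition -- no probability needed yet

# Only once probabilities are attached to outcomes does the distribution of S appear
pmf_S = {}
for omega in outcomes:
    pmf_S[S(omega)] = pmf_S.get(S(omega), Fraction(0)) + Fraction(1, 36)

print(pmf_S[7])   # Fraction(1, 6), as convolving the two uniform pmfs predicts
```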
3,774
Why is the sum of two random variables a convolution?
In response to your "Notice", um, ... no. Let $X$, $Y$, and $Z$ be random variables and let $Z = X+Y$. Then, once you choose $Z$ and $X$, you force $Y = Z - X$. You make these two choices, in this order, when you write $$ P(Z = z) = \int P(X = x) P(Y = z - x) \mathrm{d}x \text{.} $$ But that's a convolution.
3,775
Why is the sum of two random variables a convolution?
The reason is the same one that relates products of power functions to convolutions. The convolution always appears naturally if you combine two objects which have a range (e.g. the powers of two power functions or the range of the PDFs) and where the new range appears as the sum of the original ranges. It is easiest to see for medium values. For $x + y$ to have a medium value, either both have to have medium values, or if one has a high value, the other has to have a low value and vice versa. This matches the form of the convolution, which has one index going from high values to low values while the other increases. If you look at the formula for the convolution (for discrete values, just because I find it easier to see there) $(f * g)(n) = \sum_k f(k)g(n-k)$ then you see that the parameters of the functions ($k$ and $n-k$) always sum exactly to $n$. Thus what the convolution is actually doing is summing over all possible combinations that have the same value. For power functions we get $(a_0+a_1x^1+a_2x^2+\ldots+a_nx^n)\cdot(b_0+b_1x^1+b_2x^2+\ldots+b_mx^m)=\sum_{i=0}^{m+n}\left(\sum_k a_k b_{i-k}\right)x^i$ which has the same pattern of combining either high exponents from the left with low exponents from the right or vice versa, to always get the same sum. Once you see what the convolution is actually doing here, i.e. which terms are being combined and why it must therefore appear in many places, the reason for convolving random variables should become quite obvious.
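One quick way to see that the bookkeeping is identical in both settings: multiplying polynomial coefficient sequences is exactly a discrete convolution, and a die's pmf plays the role of the coefficients of its probability generating function. A brief numpy sketch, added here only as an illustration:

```python
import numpy as np

# Coefficients of two polynomials, lowest degree first:
# p(x) = 1 + 2x + 3x^2,  q(x) = 4 + 5x
p = [1, 2, 3]
q = [4, 5]

# Multiplying the polynomials convolves their coefficient sequences:
# the coefficient of x^i in p(x)q(x) is sum_k a_k * b_{i-k}
print(np.convolve(p, q))        # [ 4 13 22 15]  ->  4 + 13x + 22x^2 + 15x^3

# The same bookkeeping gives the pmf of a dice total: treat the pmf of a die
# as the coefficients of its probability generating function and "multiply"
die = np.full(6, 1 / 6)
print(np.convolve(die, die))    # pmf of the sum of two dice, support 2, ..., 12
```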
3,776
Why is the sum of two random variables a convolution?
This question may be old, but I'd like to provide yet another perspective. It builds on a formula for a change of variables in a joint probability density. It can be found in Lecture Notes: Probability and Random Processes at KTH, 2017 Ed. (Koski, T., 2017, p. 67), which itself refers to a detailed proof in Analysens Grunder, del 2 (Neymark, M., 1970, pp. 148-168): Let a random vector $\mathbf{X} = (X_1, X_2,...,X_m)$ have the joint p.d.f. $f_\mathbf{X}(x_1,x_2,...,x_m)$. Define a new random vector $\mathbf{Y}=(Y_1, Y_2,...,Y_m)$ by $$ Y_i = g_i(X_1,X_2,...,X_m), \hspace{2em}i=1,2,...,m $$ where $g_i$ is continuously differentiable and $(g_1,g_2,...,g_m)$ is invertible with the inverse $$ X_i = h_i(Y_1,Y_2,...,Y_m),\hspace{2em}i=1,2,...,m $$ Then the joint p.d.f. of $\mathbf{Y}$ (in the domain of invertibility) is $$ f_\mathbf{Y}(y_1,y_2,...,y_m) = f_\mathbf{X}(h_1(y_1,y_2,...,y_m),h_2(y_1,y_2,...,y_m),...,h_m(y_1,y_2,...,y_m))|J| $$ where $J$ is the Jacobian determinant $$ J = \begin{vmatrix} \frac{\partial x_1}{\partial y_1} & \frac{\partial x_1}{\partial y_2} & ... & \frac{\partial x_1}{\partial y_m}\\ \frac{\partial x_2}{\partial y_1} & \frac{\partial x_2}{\partial y_2} & ... & \frac{\partial x_2}{\partial y_m}\\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial x_m}{\partial y_1} & \frac{\partial x_m}{\partial y_2} & ... & \frac{\partial x_m}{\partial y_m}\\ \end{vmatrix} $$ Now, let's apply this formula to obtain the joint p.d.f. of a sum of independent random variables $X_1 + X_2$: Define the random vector $\mathbf{X} = (X_1,X_2)$ with unknown joint p.d.f. $f_\mathbf{X}(x_1,x_2)$. Next, define a random vector $\mathbf{Y}=(Y_1,Y_2)$ by $$ Y_1 = g_1(X_1,X_2) = X_1 + X_2\\ Y_2 = g_2(X_1,X_2) = X_2. $$ The inverse map is then $$ X_1 = h_1(Y_1,Y_2) = Y_1 - Y_2\\ X_2 = h_2(Y_1,Y_2) = Y_2. $$ Thus, by this formula and our assumption that $X_1$ and $X_2$ are independent, the joint p.d.f. of $\mathbf{Y}$ is $$ \begin{split} f_\mathbf{Y}(y_1,y_2) &= f_\mathbf{X}(h_1(y_1,y_2),h_2(y_1,y_2))|J|\\ & = f_\mathbf{X}(y_1 - y_2, y_2)|J|\\ & = f_{X_1}(y_1 - y_2) \cdot f_{X_2}(y_2) \cdot |J| \end{split} $$ where the Jacobian $J$ is $$ J = \begin{vmatrix} \frac{\partial x_1}{\partial y_1} & \frac{\partial x_1}{\partial y_2}\\ \frac{\partial x_2}{\partial y_1} & \frac{\partial x_2}{\partial y_2} \end{vmatrix} = \begin{vmatrix} 1 & -1\\ 0 & 1 \end{vmatrix} = 1 $$ To find the p.d.f. of $Y_1 = X_1 + X_2$, we marginalize: $$ \begin{split} f_{Y_1}(y_1) &= \int_{-\infty}^\infty f_\mathbf{Y}(y_1,y_2) \,dy_2\\ &= \int_{-\infty}^\infty f_\mathbf{X}(h_1(y_1,y_2),h_2(y_1,y_2))|J| \,dy_2\\ &= \int_{-\infty}^\infty f_{X_1}(y_1 - y_2) \cdot f_{X_2}(y_2) \,dy_2 \end{split} $$ which is where we find your convolution :D
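As a sanity check of the final marginalization formula (my own addition using scipy; the exponential example is just an assumed convenient special case, since the answer is known in closed form), one can evaluate the convolution integral numerically and compare with the known density of the sum:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# With X1, X2 ~ Exp(1) independent, Y1 = X1 + X2 should have the
# Gamma(2, 1) density f(z) = z * exp(-z).
f1 = stats.expon.pdf
f2 = stats.expon.pdf

def f_sum(z):
    # f_{Y1}(z) = integral over y2 of f_{X1}(z - y2) * f_{X2}(y2) dy2
    # (both densities vanish outside [0, z], so we integrate over [0, z])
    val, _ = quad(lambda y2: f1(z - y2) * f2(y2), 0.0, z)
    return val

for z in (0.5, 1.0, 2.0, 5.0):
    print(z, f_sum(z), z * np.exp(-z))   # the two columns should agree
```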
3,777
Why is the sum of two random variables a convolution?
Let us prove the supposition for the continuous case, and then explain and illustrate it using histograms built up from random numbers and the sums formed by adding ordered pairs of those numbers, so that the discrete convolution and both samples of random values all have length $n$. From Grinstead CM, Snell JL. Introduction to Probability: American Mathematical Soc.; 2012. Ch. 7, Exercise 1: Let $X$ and $Y$ be independent real-valued random variables with density functions $f_X (x)$ and $f_Y (y)$, respectively. Show that the density function of the sum $X + Y$ is the convolution of the functions $f_X (x)$ and $f_Y (y)$. Let $Z$ be the joint random variable $(X, Y )$. Then the joint density function of $Z$ is $f_X (x)f_Y (y)$, since $X$ and $Y$ are independent. Now compute the probability that $X + Y \leq z$, by integrating the joint density function over the appropriate region in the plane. This gives the cumulative distribution function of $Z$. $$F_Z(z)=\mathrm{P}(X+Y\leq z)= \int_{(x,y):x+y\leq z} f_X(x)\,f_Y (y)\,dy\,dx$$ $$= \int_{-\infty}^\infty f_X(x)\left[\int_{y\,\leq \,z-x} f_Y(y)\,dy \right] dx= \int_{-\infty}^\infty f_X(x)\left[F_Y(z-x)\right]\,dx.$$ Now differentiate this function with respect to $z$ to obtain the density function of $Z$. $$f_Z(z) = \frac{dF_Z(z)}{dz} = \int_{-\infty}^\infty f_X(x)\,f_Y ( z-x)\,dx.$$ To appreciate what this means in practice, it is illustrated next with an example. The realization of a random number (statistics: outcome; computer science: instance) from a distribution can be viewed as evaluating the inverse cumulative distribution function of that distribution at a random probability. (A random probability is, computationally, a single element drawn from a uniform distribution on the [0,1] interval.) This gives us a single value on the $x$-axis. Next, we generate a second random element on the $x$-axis from the inverse CDF of another, possibly different, distribution, evaluated at a second, independent random probability. We then have two random elements. When added, the two $x$-values so generated become a third element, and notice what has happened: the two elements have become a single element of magnitude $x_1+x_2$, i.e., information has been lost. This is the context in which the "addition" is taking place; it is the addition of $x$-values. When many repetitions of this type of addition take place, the resulting density of realizations (outcome density) of the sums tends toward the PDF given by the convolution of the individual densities. The overall information loss results in smoothing (or density dispersion) of the convolution (or sums) compared to the constituent PDFs (or summands). Another effect is location shifting of the convolution (or sums). Note that realizations (outcomes, instances) of multiple elements afford only sparse elements populating (exemplifying) a continuous sample space. For example, 1000 random values were created using a gamma distribution with a shape of $10/9$ and a scale of $2$. These were added pairwise to 1000 random values from a normal distribution with a mean of $4$ and a standard deviation of $1/4$. Density-scaled histograms of each of the three groups of values were co-plotted (left panel below) and contrasted (right panel below) with the density functions used to generate the random data, as well as the convolution of those density functions.
As seen in the figure, the addition-of-summands explanation appears to be plausible, as the kernel-smoothed distributions of the data (red) in the left-hand panel are similar to the continuous density functions and their convolution in the right-hand panel.
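A rough Python re-creation of the experiment described above (the distribution parameters are taken from the text; the grid, seed, and plotting hints are my own assumptions) might look like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)   # arbitrary seed

# Simulated summands, as described in the answer
n = 1000
x = rng.gamma(shape=10/9, scale=2, size=n)    # gamma summand
y = rng.normal(loc=4, scale=0.25, size=n)     # normal summand
z = x + y                                     # pairwise sums

# Density of the sum via a discretised convolution of the two PDFs
grid = np.linspace(0, 20, 2001)
dg = grid[1] - grid[0]
f_gamma = stats.gamma.pdf(grid, a=10/9, scale=2)
f_norm = stats.norm.pdf(grid, loc=4, scale=0.25)
f_conv = np.convolve(f_gamma, f_norm) * dg    # approximates the convolution integral
conv_grid = np.linspace(0, 40, len(f_conv))   # grid values add, so support is [0, 40]

# Compare the histogram of the simulated sums with the convolved density,
# e.g. with matplotlib:
#   plt.hist(z, bins=50, density=True); plt.plot(conv_grid, f_conv)
```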
3,778
Why is the sum of two random variables a convolution?
General expressions for the sums of $n$ continuous random variables are given here: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0216422 ("Multi-stage models for the failure of complex systems, cascading disasters, and the onset of disease"). For positive random variables, the density of the sum can be written simply as the inverse Laplace transform of the product of the individual Laplace transforms. The method is adapted from a calculation that appears in E. T. Jaynes's textbook Probability Theory.
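As an illustration of the Laplace-transform route (my own sketch, not taken from the linked paper; the exponential example is an assumed convenient special case), sympy can recover the density of the sum of two Exp(1) variables from the product of their Laplace transforms:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

f = sp.exp(-t)                                    # Exp(1) density on t > 0
F = sp.laplace_transform(f, t, s, noconds=True)   # 1/(s + 1)

# Density of the sum = inverse Laplace transform of the product of transforms
f_sum = sp.inverse_laplace_transform(F * F, s, t)
print(sp.simplify(f_sum))   # t*exp(-t) (times Heaviside(t)), i.e. the Gamma(2, 1) density
```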
3,779
Is random forest a boosting algorithm?
Random forest is a bagging algorithm rather than a boosting algorithm. They are two opposite ways to achieve low error. We know that error can be decomposed into bias and variance. A model that is too complex has low bias but large variance, while a model that is too simple has low variance but large bias; both lead to high error, but for two different reasons. As a result, two different ways to solve the problem came to mind (maybe Breiman's and others'): variance reduction for complex models, or bias reduction for simple models, which correspond to random forest and boosting, respectively. Random forest reduces the variance of a large number of "complex" models with low bias. The component models are not "weak" models but overly complex ones. If you read about the algorithm, the underlying trees are grown "somewhat" as large as "possible". The underlying trees are independent, parallel models. Additional random variable selection is introduced to make them even more independent, which makes random forest perform better than ordinary bagging and earns it the name "random". Boosting, by contrast, reduces the bias of a large number of "small" models with low variance. They are "weak" models, as you quoted. The component models form something like a "chain" or "nested" iterative model that addresses the bias at each stage. So they are not independent, parallel models; instead, each model is built on top of all the previous small models by weighting. That is the so-called "boosting", one model at a time. Breiman's papers and books discuss trees, random forest and boosting quite a lot. They help you to understand the principle behind the algorithms.
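A quick way to see the two strategies side by side (an illustrative sketch using scikit-learn; the dataset and hyperparameters are arbitrary choices of mine) is to compare a forest of deep, low-bias trees with a boosted ensemble of shallow, weak trees:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Variance reduction: average many deep (low-bias, high-variance) trees.
rf = RandomForestClassifier(n_estimators=300, max_depth=None, random_state=0)

# Bias reduction: boost many shallow (high-bias, low-variance) "weak" trees.
gbm = GradientBoostingClassifier(n_estimators=300, max_depth=2, random_state=0)

for name, model in [("random forest", rf), ("boosting", gbm)]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")
```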
3,780
Is random forest a boosting algorithm?
A random forest is not considered a boosting type of algorithm. As explained in your boosting link: ...most boosting algorithms consist of iteratively learning weak classifiers with respect to a distribution and adding them to a final strong classifier. When they are added, they are typically weighted in some way that is usually related to the weak learners' accuracy. After a weak learner is added, the data is reweighted... An example of this iterative process is AdaBoost, whereby weaker results are boosted or reweighted over many iterations to have the learner focus more on the areas it got wrong, and less on those observations that were correct. A random forest, in contrast, is an ensemble bagging or averaging method that aims to reduce the variance of the individual trees by growing many de-correlated trees on random samples of the dataset, and averaging them.
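To make the reweighting idea explicit, here is a bare-bones sketch of discrete AdaBoost with decision stumps (my own simplified illustration, not the exact algorithm from the linked page; the number of rounds and the data are arbitrary):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y01 = make_classification(n_samples=500, random_state=0)
y = 2 * y01 - 1                        # labels in {-1, +1}

n = len(y)
w = np.full(n, 1.0 / n)                # start with uniform sample weights
stumps, alphas = [], []

for _ in range(50):
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
    pred = stump.predict(X)
    err = np.sum(w * (pred != y)) / np.sum(w)
    alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
    # upweight the points this weak learner got wrong, downweight the rest
    w *= np.exp(-alpha * y * pred)
    w /= w.sum()
    stumps.append(stump)
    alphas.append(alpha)

# Final "strong" classifier: accuracy-weighted vote of the weak learners
agg = sum(a * s.predict(X) for a, s in zip(alphas, stumps))
print("training accuracy:", np.mean(np.sign(agg) == y))
```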
3,781
Is random forest a boosting algorithm?
It is an extension of bagging. The procedure is as follows: you take a bootstrap sample of your data and then use this to grow a classification or regression tree (CART). This is done a predefined number of times, and the prediction is then the aggregation of the individual trees' predictions; it could be a majority vote (for classification) or an average (for regression). This approach is called bagging (Breiman 1994). Furthermore, the candidate variables for each split of each tree are taken from a random sample of all the available independent variables. This introduces even more variability and makes the trees more diverse. This is called the random subspace method (Ho, 1998). As mentioned, this produces trees that are very diverse, which translates into trees that are highly independent of each other. Because of Jensen's inequality, we know that the error of the average of these trees' predictions will be smaller than or equal to the average error of the individual trees grown from that data set. Another way to look at it is to look at the mean squared error and notice how it can be decomposed into bias and variance parts (this is related to an issue in supervised learning called the bias-variance tradeoff). Random forest achieves better accuracy by reducing variance through averaging the predictions of orthogonal trees. It should be noted that it inherits the bias of its trees, which is a much-discussed problem; check for example this question.
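The procedure and the Jensen-type inequality can both be illustrated with a hand-rolled version of bagging plus the random subspace method (a sketch of mine; the data, number of trees and feature-subset size are arbitrary choices):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=1000, n_features=20, noise=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
n_trees, n_feat_sub = 100, 8
preds = []

for _ in range(n_trees):
    rows = rng.integers(0, len(X_tr), len(X_tr))               # bootstrap sample
    cols = rng.choice(X.shape[1], n_feat_sub, replace=False)   # random subspace
    tree = DecisionTreeRegressor().fit(X_tr[np.ix_(rows, cols)], y_tr[rows])
    preds.append(tree.predict(X_te[:, cols]))

preds = np.array(preds)
mse_of_average = np.mean((preds.mean(axis=0) - y_te) ** 2)
average_of_mses = np.mean([(p - y_te) ** 2 for p in preds])

# The ensemble's MSE is never worse than the average single-tree MSE
print(mse_of_average, "<=", average_of_mses)
```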
3,782
Is random forest a boosting algorithm?
I believe you are confusing boosting in particular with ensemble methods in general, of which there are many. Your "definition" of boosting is not the full definition, which is elaborated on in Pat's answer. If you would like to learn more about ensemble methods, I recommend you pick up the following book: John Elder & Giovanni Seni. Ensemble Methods in Data Mining: Improving Accuracy Through Combining Predictions. (2010)
3,783
Is random forest a boosting algorithm?
Random forest is a bagging technique and not a boosting technique. In boosting, as the name suggests, each learner learns from the previous ones, which in turn boosts the learning. The trees in random forests are run in parallel. There is no interaction between these trees while they are being built. Once all the trees are built, a vote or average is taken across all the trees' predictions, depending on whether the problem is a classification or a regression problem. The trees in boosting algorithms like GBM (Gradient Boosting Machine) are trained sequentially. Let's say the first tree got trained and made some predictions on the training data. Not all of these predictions would be correct. Let's say that out of a total of 100 predictions, the first tree made mistakes on 10 observations. These 10 observations would then be given more weight when building the second tree. Notice that the learning of the second tree got boosted by the learning of the first tree. Hence the term boosting. This way, each of the trees is built sequentially on the learnings from the past trees.
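For the sequential flavour, here is a stripped-down residual-fitting sketch of gradient boosting for regression (my own illustration of the general idea rather than the exact GBM algorithm; the learning rate, depth, and data are arbitrary choices):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=5, random_state=0)

learning_rate, n_trees = 0.1, 200
prediction = np.full_like(y, y.mean(), dtype=float)
trees = []

for _ in range(n_trees):
    residuals = y - prediction                     # what the ensemble still gets wrong
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    prediction += learning_rate * tree.predict(X)  # "boost" the current fit
    trees.append(tree)

print("final training MSE:", np.mean((y - prediction) ** 2))
```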
3,784
Is random forest a boosting algorithm?
I'd like to point out that random forest is not just a bagging technique. It's bagging + a random subset of the features. The definition on Wikipedia says: ...The above procedure describes the original bagging algorithm for trees. Random forests differ in only one way from this general scheme: they use a modified tree learning algorithm that selects, at each candidate split in the learning process, a random subset of the features. This process is sometimes called "feature bagging". So a bagged tree is bagging, and random forest is bagging + "feature bagging". I also recommend the following link by Arvind Shukla for more detail: https://www.linkedin.com/pulse/random-forest-bagging-tree-arvind-shukla
3,785
Cross-Entropy or Log Likelihood in Output layer
The negative log likelihood (eq. 80) is also known as the multiclass cross-entropy (ref: Pattern Recognition and Machine Learning, Section 4.3.4), as they are in fact two different interpretations of the same formula. Eq. 57 is the negative log likelihood of the Bernoulli distribution, whereas eq. 80 is the negative log likelihood of the multinomial distribution with one observation (a multiclass version of Bernoulli). For binary classification problems, the softmax function outputs two values (each between 0 and 1, summing to 1) to give the prediction of each class, while the sigmoid function outputs one value (between 0 and 1) to give the prediction of one class (so the probability of the other class is $1-p$). So eq. 80 can't be applied directly to the sigmoid output, though it is essentially the same loss as eq. 57. Also see this answer. The following is a simple illustration of the connection between (sigmoid + binary cross-entropy) and (softmax + multiclass cross-entropy) for binary classification problems. Say we take $0.5$ as the split point between the two categories. For the sigmoid output it follows that $$\sigma(wx+b)=0.5$$ $$wx+b=0$$ which is the decision boundary in the feature space. For the softmax output it follows that $$\frac{e^{w_1x+b_1}}{e^{w_1x+b_1}+e^{w_2x+b_2}}=0.5$$ $$e^{w_1x+b_1}=e^{w_2x+b_2}$$ $$w_1x+b_1=w_2x+b_2$$ $$(w_1-w_2)x+(b_1-b_2)=0$$ so it remains the same model, although there are twice as many parameters. The following figures show the decision boundaries obtained using these two methods, which are almost identical.
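A small numerical check of this equivalence (my own addition): for two classes, a softmax over two scores gives the same probability as a sigmoid over their difference, so the binary and two-class multiclass cross-entropy losses coincide.

```python
import numpy as np

rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=5), rng.normal(size=5)   # per-class scores (logits)
y = rng.integers(0, 2, size=5)                    # 1 = class 1, 0 = class 2

p_softmax = np.exp(z1) / (np.exp(z1) + np.exp(z2))   # P(class 1) under softmax
p_sigmoid = 1 / (1 + np.exp(-(z1 - z2)))             # P(class 1) under sigmoid
print(np.allclose(p_softmax, p_sigmoid))             # True

# eq. 57-style binary cross-entropy vs eq. 80-style multiclass log-likelihood
bce = -(y * np.log(p_sigmoid) + (1 - y) * np.log(1 - p_sigmoid))
mce = -np.where(y == 1, np.log(p_softmax), np.log(1 - p_softmax))
print(np.allclose(bce, mce))                         # True
```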
3,786
Cross-Entropy or Log Likelihood in Output layer
Expanding on @dontloo's answer, consider a classification task with $K$ classes. Let's separately look at the output layer of a network and the cost function. For our purpose here, the output layer is either sigmoid or softmax and the cost function is either cross-entropy or log-likelihood. Output Layers In the case of a sigmoid, the output layer will have $K$ sigmoids, each outputting a value between 0 and 1. Crucially, the sum of these outputs may not equal one and hence they cannot be interpreted as a probability distribution. The only exception to both statements is the case $K=2$, i.e. binary classification, when only one sigmoid is sufficient. And, in this case, a second output can be imagined to be one minus the lone output. If the output layer is softmax, it also has $K$ outputs. But in this case, the outputs sum to one. Because of this constraint, a network with a softmax output layer has less flexibility than one with multiple sigmoids (except, of course, for $K=2$). To illustrate the constraint, consider a network used to classify digits. It has ten output nodes. If they are sigmoids then two of them, say the ones for 8 and 9 or 0 and 6, can both output, say, 0.9. In the case of softmax, this is not possible. Outputs could still be equal--both 0.45, for example--but because of the constraint, when the weights get adjusted to increase the output for one digit, they necessarily decrease the output for some other digit(s). The text has a slider demo in the same chapter to illustrate this effect. What about inference? Well, one straightforward method would be to simply assign the class which has the largest output. This is true for both types of output layers. As for the cost function, it is possible to use either cross-entropy or log-likelihood (or some other cost function such as mean squared error) for either network. Let's look at that below. Cost Functions The cross-entropy cost of a $K$-class network would be $$ C_\text{CE} = -\frac{1}{n} \sum\limits_x \sum\limits_{k=1}^K (y_k \ln a_k^L + (1 - y_k) \ln (1 - a_k^L)) $$ where $x$ is an input and $n$ is the number of examples in the input set. This is equation (63) in the book. Note that, for each $x$, only one of the $y_k$ is 1 and the rest are 0 (i.e. one-hot encoding). The log-likelihood cost of a $K$-class network is $$ C_\text{LL} = -\frac{1}{n} \sum\limits_x y^T \ln(a^L) = -\frac{1}{n} \sum\limits_x \sum\limits_{k=1}^K y_k \ln(a_k^L) $$ where $y$ is the (one-hot encoded) desired output and $a^L$ is the output of the model. This is just a different way of writing equation (80) in the book. Again, note that, for each $x$, only one element of $y$ is 1 and the rest are 0. Here is the crucial difference between the two cost functions: the log-likelihood considers only the output for the corresponding class, whereas the cross-entropy function also considers the other outputs. You can see this in the above expressions--in the summation, both $C_\text{CE}$ and $C_\text{LL}$ have the same first term, but $C_\text{CE}$ has an additional term. What this means is that both CE and LL reward the network for the amount of output in the correct class. But CE also penalizes the network for the amounts in the other classes. If the confusion is strong then the penalty is also strong. Let's illustrate this below. Example Let's look at a single sample, i.e. $n=1$. Let $$ \begin{align} a^L & = [0.55, 0.02, 0.01, 0.03, 0.01, 0.05, 0.17, 0.01, 0.06, 0.09], \text{and} \\ y & = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0] \end{align} $$ i.e.
the input is a 0 and the network has some confusion over a 6, but it's not too bad. This output is applicable both to sigmoid and softmax output layers. The costs are $$ \begin{align} C_\text{CE} & = 1.0725 \\ C_\text{LL} & = 0.5978 \end{align} $$ Now let's look at another scenario in which the output indicates more confusion between a 0 and a 6. We keep the output of the 0-class the same, increase the output of the 6-class, and decrease the outputs of the other classes. Again, this output is applicable to both sigmoid and softmax. $$ a^L = [0.55, 0.002, 0.001, 0.003, 0.001, 0.04, 0.37, 0.001, 0.012, 0.02] $$ What happens to the cost? $$ \begin{align} C_\text{CE} & = 1.1410 \\ C_\text{LL} & = 0.5978 \end{align} $$ As can be seen, the LL cost has not changed. But the CE cost has increased--it has penalized the stronger confusion of a 0 with a 6! Summary In summary, yes, the output layers and cost functions can be mixed and matched. They affect how the network behaves and how the results are to be interpreted.
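The quoted numbers can be reproduced with a few lines of numpy (my own addition; the function names are just labels for the eq. (63)- and eq. (80)-style costs with $n=1$):

```python
import numpy as np

def cross_entropy(y, a):     # eq. (63)-style cost, summed over all output units
    return -np.sum(y * np.log(a) + (1 - y) * np.log(1 - a))

def log_likelihood(y, a):    # eq. (80)-style cost, only the true class contributes
    return -np.sum(y * np.log(a))

y = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0])
a1 = np.array([0.55, 0.02, 0.01, 0.03, 0.01, 0.05, 0.17, 0.01, 0.06, 0.09])
a2 = np.array([0.55, 0.002, 0.001, 0.003, 0.001, 0.04, 0.37, 0.001, 0.012, 0.02])

print(cross_entropy(y, a1), log_likelihood(y, a1))  # ~1.0725, ~0.5978
print(cross_entropy(y, a2), log_likelihood(y, a2))  # ~1.1410, ~0.5978
```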
3,787
Cross-Entropy or Log Likelihood in Output layer
I think that @user650654 made a mistake in his formulation of the cross-entropy and therefore his conclusion is incorrect. In the case of hard labels (i.e., using one-hot vectors for the ground truth, where only one element of the vector is assigned probability 1 and all others are assigned probability 0), the cross-entropy loss and the log-likelihood are equivalent. Let's assume that $y$ and $\hat{y}$ are the one-hot ground-truth vector and the estimated probability vector produced by applying the softmax function. Here I denote by $y[i]$ the $i$-th entry of the $y$ vector. Recall that the definition of cross-entropy is: $$\text{CE}(y,\hat{y}) = -\Sigma_{i}y[i]\log(\hat{y}[i])$$ But since $y$ is a one-hot vector, only one of the terms in that sum affects the output, and that term is simply $-\log(\hat{y}[i])$ for the index $i$ of the true class, which is exactly the negative log-likelihood.
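A tiny numerical illustration of this collapse (my own, with made-up probabilities):

```python
import numpy as np

# With a one-hot target, the full cross-entropy sum reduces to the negative
# log of the predicted probability of the true class.
y_hat = np.array([0.1, 0.7, 0.2])   # softmax output
y = np.array([0, 1, 0])             # one-hot ground truth

ce = -np.sum(y * np.log(y_hat))
nll = -np.log(y_hat[np.argmax(y)])
print(ce, nll)                      # both ~0.3567
```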
3,788
Interpretation of log transformed predictor and/or response
Charlie provides a nice, correct explanation. The Statistical Computing site at UCLA has some further examples: https://stats.oarc.ucla.edu/sas/faq/how-can-i-interpret-log-transformed-variables-in-terms-of-percent-change-in-linear-regression, and https://stats.oarc.ucla.edu/other/mult-pkg/faq/general/faqhow-do-i-interpret-a-regression-model-when-some-variables-are-log-transformed Just to complement Charlie's answer, below are specific interpretations of your examples. As always, coefficient interpretations assume that you can defend your model, that the regression diagnostics are satisfactory, and that the data are from a valid study. Example A: No transformations DV = Intercept + B1 * IV + Error "One unit increase in IV is associated with a (B1) unit increase in DV." Example B: Outcome transformed log(DV) = Intercept + B1 * IV + Error "One unit increase in IV is associated with a (B1 * 100) percent increase in DV." Example C: Exposure transformed DV = Intercept + B1 * log(IV) + Error "One percent increase in IV is associated with a (B1 / 100) unit increase in DV." Example D: Outcome transformed and exposure transformed log(DV) = Intercept + B1 * log(IV) + Error "One percent increase in IV is associated with a (B1) percent increase in DV."
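For a concrete check of Example D, one possible simulation (my own sketch using statsmodels; the variable names and the true elasticity of 0.5 are made up) fits a log-log model and recovers the elasticity as the slope:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
iv = rng.uniform(1, 100, size=500)
dv = np.exp(1.0 + 0.5 * np.log(iv) + rng.normal(0, 0.1, size=500))
df = pd.DataFrame({"DV": dv, "IV": iv})

fit = smf.ols("np.log(DV) ~ np.log(IV)", data=df).fit()
print(fit.params)   # slope ~0.5: a 1% increase in IV ~ a 0.5% increase in DV
```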
3,789
Interpretation of log transformed predictor and/or response
In the log-log model, note that $$\begin{equation*}\beta_1 = \frac{\partial \log(y)}{\partial \log(x)}.\end{equation*}$$ Recall that $$\begin{equation*} \frac{\partial \log(y)}{\partial y} = \frac{1}{y} \end{equation*}$$ or $$\begin{equation*} \partial \log(y) = \frac{\partial y}{y}. \end{equation*}$$ Multiplying this latter formulation by 100 gives the percent change in $y$. We have analogous results for $x$. Using this fact, we can interpret $\beta_1$ as the percent change in $y$ for a 1 percent change in $x$. Following the same logic, for the level-log model, we have $$\begin{equation*}\beta_1 = \frac{\partial y}{\partial \log(x)} = 100 \frac{\partial y}{100 \times \partial \log(x)},\end{equation*}$$ so $\beta_1/100$ is the unit change in $y$ for a one percent change in $x$.
3,790
Interpretation of log transformed predictor and/or response
The main purpose of linear regression is to estimate a mean difference of outcomes comparing adjacent levels of a regressor. There are many types of means. We are most familiar with the arithmetic mean. $$AM(X) = \frac{\left( X_1 + X_2 + \ldots + X_n \right)}{n}$$ The AM is what is estimated using OLS and untransformed variables. The geometric mean is different: $$GM(X) = \sqrt[\LARGE{n}]{\left( X_1 \times X_2 \times \ldots \times X_n \right)} = \exp(AM(\log(X)))$$ Practically, a GM difference is a multiplicative difference: you pay X% of a premium in interest when assuming a loan, your hemoglobin levels decrease X% after starting metformin, the failure rate of springs increases X% as a fraction of the width. In all of these instances, a raw mean difference makes less sense. Log transforming estimates a geometric mean difference. If you log transform an outcome and model it in a linear regression using the following formula specification: log(y) ~ x, the coefficient $\beta_1$ is a mean difference of the log outcome comparing adjacent units of $X$. This is practically useless, so we exponentiate the parameter, $e^{\beta_1}$, and interpret this value as a geometric mean ratio. For instance, in a study of HIV viral load following 10 weeks' administration of ART, we might estimate a pre/post geometric mean ratio of $e^{\beta_1} = 0.40$. That means that whatever the viral load was at baseline, it was on average 60% lower, i.e. 0.4 times its baseline value, at follow-up. If the load was 10,000 at baseline, my model would predict it to be 4,000 at follow-up; if it were 1,000 at baseline, my model would predict it to be 400 at follow-up (a smaller difference on the raw scale, but proportionally the same). This is an important distinction from other answers: the convention of multiplying the log-scale coefficient by 100 comes from the approximation $\log(x) \approx x-1$ when $x-1$ is small. If the coefficient (on the log scale) is, say, 0.05, then $\exp(0.05) \approx 1.05$ and the interpretation is: a 5% "increase" in the outcome for a 1 unit "increase" in $X$. However, if the coefficient is 0.5 then $\exp(0.5) \approx 1.65$ and we interpret this as a 65% "increase" in $Y$ for a 1 unit "increase" in $X$. It is NOT a 50% increase. Suppose we log transform a predictor: y ~ log(x, base=2). Here, I am interested in a multiplicative change in $x$ rather than a raw difference. I am now interested in comparing participants differing by two-fold in $X$. Suppose, for instance, that I am interested in measuring infection (yes/no) following exposure to a blood-borne pathogen at various concentrations using an additive risk model. The biologic model may suggest that risk increases proportionately for every doubling of concentration. Then I do not transform my outcome, but the estimated $\beta_1$ coefficient is interpreted as a risk difference comparing groups exposed at two-fold concentration differences of infectious material. Lastly, log(y) ~ log(x) simply applies both definitions to obtain a multiplicative difference comparing groups differing multiplicatively in exposure levels.
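Two of these points are easy to verify numerically (my own addition): the geometric mean is the exponential of the arithmetic mean of the logs, and exponentiating the coefficient shows why "multiply by 100" only works for small coefficients.

```python
import numpy as np

# Geometric mean as exp of the arithmetic mean of the logs
x = np.array([1.0, 10.0, 100.0])
print(np.exp(np.mean(np.log(x))))      # 10.0, vs arithmetic mean 37.0

# Why the "multiply by 100" rule is only an approximation
print(np.exp(0.05))                    # ~1.051 -> ~5% increase
print(np.exp(0.5))                     # ~1.649 -> ~65% increase, not 50%
```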
3,791
Two-tailed tests... I'm just not convinced. What's the point?
Think of the data as the tip of the iceberg – all you can see above the water is the tip of the iceberg, but in reality you are interested in learning something about the entire iceberg. Statisticians, data scientists and others working with data are careful not to let what they see above the water line influence and bias their assessment of what's hidden below the water line. For this reason, in a hypothesis testing situation, they tend to formulate their null and alternative hypotheses before they see the tip of the iceberg, based on their expectations (or lack thereof) of what might happen if they could view the iceberg in its entirety.

Looking at the data to formulate your hypotheses is a poor practice and should be avoided – it's like putting the cart before the horse. Recall that the data come from a single sample selected (hopefully using a random selection mechanism) from the target population/universe of interest. The sample has its own idiosyncrasies, which may or may not be reflective of the underlying population. Why would you want your hypotheses to reflect a narrow slice of the population instead of the entire population?

Another way to think about this is that every time you select a sample from your target population (using a random selection mechanism), the sample will yield different data. If you use the data (which you shouldn't!!!) to guide your specification of the null and alternative hypotheses, your hypotheses will be all over the map, essentially driven by the idiosyncratic features of each sample. Of course, in practice we only draw one sample, but it would be a very disquieting thought to know that if someone else performed the same study with a different sample of the same size, they would have to change their hypotheses to reflect the realities of their sample.

One of my graduate school professors used to have a very wise saying: "We don't care about the sample, except that it tells us something about the population". We want to formulate our hypotheses to learn something about the target population, not about the one sample we happened to select from that population.
3,792
Two-tailed tests... I'm just not convinced. What's the point?
I think when considering your question it helps to keep the goal/selling points of null-hypothesis significance testing (NHST) in mind; it's just one paradigm (albeit a very popular one) for statistical inference, and the others have their own strengths as well (e.g., see here for a discussion of NHST relative to Bayesian inference).

What's the big perk of NHST? Long-run error control. If you follow the rules of NHST (and sometimes that is a very big if), then you should have a good sense of how likely you are to be wrong with the inferences you make, in the long run.

One of the persnickety rules of NHST is that, without further alteration to your testing procedure, you only get to take one look at your test of interest. Researchers in practice often ignore (or are not aware of) this rule (see Simmons et al., 2011), conducting multiple tests after adding waves of data, checking their $p$-values after adding/removing variables from their models, etc. The problem with this is that researchers are rarely neutral with respect to the outcome of NHST; they are keenly aware that significant results are more likely to be published than non-significant results (for reasons that are both misguided and legitimate; Rosenthal, 1979). Researchers are therefore often motivated to add data/amend models/select outliers and repeatedly test until they "uncover" a significant effect (see John et al., 2012, for a good introduction).

A counterintuitive problem is created by the above practices, described nicely in Dienes (2008): if researchers keep adjusting their sample/design/models until significance is achieved, then their desired long-run error rates for false-positive findings (often $\alpha = .05$) and false-negative findings (often $\beta = .20$) will approach 1.0 and 0.0, respectively (i.e., you will always reject $H_0$, both when it's false and when it's true).

In the context of your specific questions, researchers use two-tailed tests as a default when they don't want to make particular predictions with respect to the direction of the effect. If they are wrong in their guess and then run a one-tailed test in the direction of the observed effect, their long-run $\alpha$ will be inflated. If they look at descriptive statistics and run a one-tailed test based on their eyeballing of the trend, their long-run $\alpha$ will be inflated. You might think this isn't a huge problem in practice, that the $p$-values merely lose their long-run meaning, but if they don't retain that meaning, it raises the question of why you are using an approach to inference that prioritizes long-run error control.

Lastly (and as a matter of personal preference), I would have less of a problem if you first conducted a two-tailed test, found it non-significant, then did the one-tailed test in the direction the first test implied, and found it to be significant, if (and only if) you performed a strict confirmatory replication of that effect in another sample and published the replication in the same paper. Exploratory data analysis--with its error-rate-inflating flexible analysis practices--is fine, as long as you are able to replicate your effect in a new sample without that same analytic flexibility.

References

Dienes, Z. (2008). Understanding psychology as a science: An introduction to scientific and statistical inference. Palgrave Macmillan.

John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524-532.

Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638.

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359-1366.
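As a rough illustration of the error-rate inflation described above, here is a minimal R simulation sketch of "testing after each new wave of data and stopping at the first significant result"; the batch size, number of looks, and replication count are arbitrary choices, not taken from the cited papers.

# Hedged sketch: under a true null, retesting after every new batch of data and
# stopping at the first p < .05 pushes the false-positive rate well above 5%.
set.seed(1)
reps <- 5000; looks <- 10; batch <- 10
false.pos <- replicate(reps, {
  x <- numeric(0)
  rejected <- FALSE
  for (k in 1:looks) {
    x <- c(x, rnorm(batch))                       # add a new wave of data generated under H0 (mean 0)
    if (t.test(x)$p.value < 0.05) { rejected <- TRUE; break }
  }
  rejected
})
mean(false.pos)                                   # noticeably larger than the nominal 0.05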
3,793
Two-tailed tests... I'm just not convinced. What's the point?
Unfortunately, the motivating example of drug development is not a good one, as it's not what we do to develop drugs. We use different, more stringent rules to stop a study if trends are on the side of harm. This is for the safety of the patients, and also because the drug is unlikely to magically swing in the direction of a meaningful benefit.

So why do two-tailed tests (when in most cases we have some a priori notion of the possible direction of the effect we're trying to model)?

The null hypothesis should bear some resemblance to belief, in the sense of being plausible, informed, and justified. In most cases, people agree that an "uninteresting result" is one where there is 0 effect, whereas a negative or a positive effect is of equal interest.

It is very hard to articulate a composite null hypothesis, e.g. the case where we know the statistic could be equal to or less than a certain amount. One must be very explicit about the null hypothesis to make sense of scientific findings. It's worth pointing out that the way a composite hypothesis test is conducted is that the statistic under the null hypothesis is evaluated at the null value most consistent with the observed data. So if the effect is in the positive direction as expected, the null value is taken to be 0 anyway, and the issue is moot.

A two-tailed test amounts to conducting two one-sided tests with control for multiple comparisons! The two-tailed test is actually partly valued because it ends up being more conservative in the long run. When we have a good belief about the direction of the effect, a two-tailed test yields false positives in that direction half as often, with very little overall effect on power. In the case of evaluating a treatment in a randomized controlled trial, if you tried to sell me a one-sided test, I would stop you to ask, "Well wait, why would we believe the treatment is actually harmful? Is there actually evidence to support this? Is there even equipoise [an ability to demonstrate a beneficial effect]?" The logical inconsistency behind the one-sided test calls the whole research into question.

If truly nothing is known, any value other than 0 is considered interesting, and the two-tailed test is not just a good idea, it's necessary.
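A small numerical sketch of the "half as often" point, assuming purely for illustration that the test statistic is standard normal under the null:

alpha <- 0.05
qnorm(1 - alpha)                  # one-sided critical value, about 1.645
qnorm(1 - alpha/2)                # two-sided critical value, about 1.960
1 - pnorm(qnorm(1 - alpha))       # P(false positive in the anticipated direction), one-sided: 0.05
1 - pnorm(qnorm(1 - alpha/2))     # same probability for the two-sided test: 0.025, i.e. half as often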
3,794
Two-tailed tests... I'm just not convinced. What's the point?
One way to approach it is to temporarily forget about hypothesis testing and think about confidence intervals instead. One-sided tests correspond to one-sided confidence intervals and two-sided tests correspond to two-sided confidence intervals. Suppose that you want to estimate the mean of a population. Naturally, you take a sample and compute a sample mean. There is no reason to take a point-estimate at face value, so you express your answer in terms of an interval that you are reasonably confident contains the true mean. What type of interval do you choose? A two-sided interval is by far the more natural choice. A one-sided interval only makes sense when you simply don't care about finding either an upper bound or a lower bound of your estimate (because you believe that you already know a useful bound in one direction). How often are you really that sure about the situation? Perhaps switching the question to confidence intervals doesn't really nail it down, but it is methodologically inconsistent to prefer one-tailed tests but two-sided confidence intervals.
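For a concrete comparison, here is a minimal R sketch with simulated data (the sample and confidence level are arbitrary); t.test() returns a two-sided interval by default and a one-sided interval when a directional alternative is specified.

set.seed(1)
x <- rnorm(30, mean = 1)
t.test(x)$conf.int                              # two-sided 95% interval: finite lower and upper bounds
t.test(x, alternative = "greater")$conf.int     # one-sided interval: a lower bound only (upper bound is Inf)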
3,795
Two-tailed tests... I'm just not convinced. What's the point?
After learning the absolute basics of hypothesis testing and getting to the part about one vs two tailed tests... I understand the basic math and increased detection ability of one tailed tests, etc... But I just can't wrap my head around one thing... What's the point? I'm really failing to understand why you should split your alpha between the two extremes when your sample result can only be in one or the other, or neither.

The problem is that you don't know the population mean. I have never encountered a real-world scenario in which I knew the true population mean.

Take the example scenario from the quoted text above. How could you possibly "fail to test" for a result in the opposite direction? You have your sample mean. You have your population mean. Simple arithmetic tells you which is higher. What is there to test, or fail to test, in the opposite direction? What's stopping you just starting from scratch with the opposite hypothesis if you clearly see that the sample mean is way off in the other direction?

I read your paragraph several times, but I'm still not sure about your arguments. Do you want to rephrase it? You fail to "test" if your data doesn't land in your chosen critical region.

I assume this also applies to switching the polarity of your one-tailed test. But how is this "doctored" result any less valid than if you had simply chosen the correct one-tailed test in the first place?

The quote is correct because hacking a p-value is inappropriate. The thread "How much do we know about p-hacking 'in the wild'?" has more details.

Clearly I am missing a big part of the picture here. It all just seems too arbitrary. Which it is, I guess, in the sense that what denotes "statistically significant" - 95%, 99%, 99.9%... - is arbitrary to begin with. Help?

It is arbitrary. That's why data scientists generally report the magnitude of the p-value itself (not just significant or insignificant), and also the effect size.
3,796
Two-tailed tests... I'm just not convinced. What's the point?
Well, it all comes down to the question you want to answer. If the question is "Is one group of values bigger than the other?", you can use a one-tailed test. To answer the question "Are these groups of values different?", you use the two-tailed test. Take into consideration that a set of data may be statistically significantly higher than another, yet not statistically significantly different... and that's statistics.
3,797
Two-tailed tests... I'm just not convinced. What's the point?
But how is this "doctored" result any less valid than if you had simply chosen the correct one-tailed test in the first place? The alpha value is the probability that you will reject the null, given that the null is true. Suppose your null is that the sample mean is normally distributed with mean zero. If P(sample mean>1|H0) = .05, then the rule "Collect a sample, and reject the null if the sample mean is greater than 1" has a probability, given that the null is true, of 5% of rejecting the null. The rule "Collect a sample, and if the sample mean is positive, then reject the null if the sample mean is greater than 1, and if the sample mean is negative, reject the null if the sample mean is less than 1" has a probability, given that the null is true, of 10% of rejecting the null. So the first rule has an alpha of 5%, and the second rule has an alpha of 10%. If you start out with a two-tailed test, and then change it to a one-tailed test based on the data, then you're following the second rule, so it would be inaccurate to report your alpha as 5%. The alpha value depends not only on what the data is, but what rules you are following in analyzing it. If you're asking why use a metric that has this property, rather than something that depends only on the data, that is a more complicated question.
3,798
Two-tailed tests... I'm just not convinced. What's the point?
Regarding the 2nd point

Choosing a one-tailed test after running a two-tailed test that failed to reject the null hypothesis is not appropriate, no matter how "close" to significant the two-tailed test was.

we have that, if the null is true, the first, two-tailed, test falsely rejects with probability $\alpha$, but the one-sided test may also reject in the second stage. The overall rejection probability will hence exceed $\alpha$, and you are not testing at the level you believe to be testing anymore - you get false rejections more often than in $\alpha\cdot 100\%$ of the cases in which the strategy is applied to true null hypotheses.

Overall, we seek $$ P(\text{two-sided rejects or one-sided does, but two-sided doesn't}) $$ which we may express as $$ P(\text{two-sided rejects} \cup \text{(one-sided does} \cap \text{two-sided doesn't)}) $$ The two events in the union are disjoint, so that we are after $$ P(\text{two-sided rejects}) + P(\text{one-sided does} \cap \text{two-sided doesn't}) $$

For the second term, there is probability mass $\alpha/2$ between the upper $1-\alpha$ and $1-\alpha/2$ quantiles (i.e., the rejection points of the one-sided and two-sided tests), which is the joint probability of the two-sided test not rejecting but the one-sided test doing so. Hence, $$P(\text{one-sided does} \cap \text{two-sided doesn't})=\alpha/2 $$ so that the overall rejection probability of this strategy is $$\alpha+\frac{\alpha}{2}>\alpha$$

Effectively, we just add up the probabilities that the test statistic lands to the left of the $\alpha/2$-quantile, between the upper $1-\alpha$ and $1-\alpha/2$ quantiles, or to the right of the $1-\alpha/2$-quantile.

Here is a little numerical illustration:

n <- 100
alpha <- 0.05

# returns 1 if the two-sided test rejects, 0 otherwise
two.sided <- function(x, alpha = 0.05) (sqrt(n) * abs(mean(x)) > qnorm(1 - alpha/2))

# returns 1 if the one-sided test rejects, 0 otherwise
one.sided <- function(x, alpha = 0.05) (sqrt(n) * mean(x) > qnorm(1 - alpha))

reps <- 1e8
two.step <- rep(NA, reps)
for (i in 1:reps){
  # generate data from a N(0,1) distribution, so that the test statistic
  # sqrt(n)*mean(x) is also N(0,1) under H_0: mu = 0
  x <- rnorm(n)
  # first conduct the two-sided test, then the one-sided test if the two-sided one fails to reject
  two.step[i] <- ifelse(two.sided(x) == 0, one.sided(x), 1)
}

> mean(two.step)
[1] 0.07505351
3,799
Two-tailed tests... I'm just not convinced. What's the point?
This is just one arbitrary way to look at it: what is a statistical test used for? Probably the most frequent reason to perform a test is that you want to convince people (i.e. editors, reviewers, readers, audience) that your results are "far enough off random" to be noteworthy. And somehow we concluded that $p < \alpha = 0.05$ is the arbitrary, yet universal, truth. For any other sensible reason to perform tests, you would never settle for a fixed $\alpha$ of $0.05$; you would vary your $\alpha$ from case to case, depending on how important the consequences you draw from the test are.

Back to convincing people that something is "far enough from just random" to meet a universal criterion of noteworthiness. We have an insensible, yet universally accepted, criterion: we agree to believe things to be "not random" at $\alpha=0.05$ for two-sided testing. An equivalent criterion would be to look at the data, decide which way to test, and draw the line at $\alpha=0.025$. The second one is equivalent to the first, but it is not what we have historically settled on. Once you start to do one-sided tests with $\alpha=0.05$, you make yourself suspected of undue behaviour, of fishing for significance. Don't do that if you want to convince people!

Then, of course, there is this thing called researcher degrees of freedom. You can find significance in any kind of data if you have sufficient data and are free to test it in as many ways as you wish. This is why you are meant to decide on the test you conduct before looking at the data. Everything else leads to irreproducible test results. I advise going to YouTube and looking at Andrew Gelman's talk "Crimes on data" for more on that.
3,800
Two-tailed tests... I'm just not convinced. What's the point?
At first glance, neither of these statements makes the assertion that a two-sided test is 'superior' to a one-sided test. There simply needs to be a logical connection between the research hypothesis being tested and the statistical inference being performed. For instance:

... consider the consequences of missing an effect in the other direction. Imagine you have developed a new drug that you believe is an improvement over an existing drug. You wish to maximize ability to detect the improvement, so you opt for a one-tailed test. In doing so, you fail to test for the possibility that the new drug is less effective than the existing drug.

First off, this is a drug study, so being incorrect in the opposite direction has social significance beyond the framework of statistics. Like many have said, health isn't the best domain for making generalizations. In the quote above, it seems to be about testing a drug when another already exists, which to me implies your drug is assumed to be already effective; the statement concerns the comparison of two effective drugs thereafter. If, when comparing these distributions, you neglect one side of the population for the sake of improving the comparative results, the conclusion is not only biased but the comparison is no longer a valid one: you're comparing apples to oranges. Similarly, there may well be point estimates that make no difference to the statistical conclusion but are very much of social importance, because our sample represents people's lives: something that cannot "re-occur" and is invaluable. Alternatively, the statement implies the researcher has an incentive: "you wish to maximize your ability to detect the improvement..." That incentive is not incidental to why this scenario is singled out as bad protocol.

Choosing a one-tailed test after running a two-tailed test that failed to reject the null hypothesis is not appropriate, no matter how "close" to significant the two-tailed test was.

Again, here it implies the researcher is 'switching' the test, from two-sided to one-sided. This is never appropriate. It's imperative to have a research purpose before testing. By always defaulting to the convenience of a two-sided approach, researchers conveniently fail to understand the phenomenon more rigorously.

Here's a paper on this very topic, in fact, making the case that two-sided tests have been overused. It blames the overuse of two-sided tests on the lack of a:

clear distinction and a logical linkage between the research hypothesis and its statistical hypothesis

It takes the position that researchers:

may not be aware of the difference between the two expressive modes or aware of the logical flow in which the research hypothesis should be translated into the statistical hypothesis. A convenience-oriented mixing of the research and statistical hypotheses may be a cause of the overuse of two-tailed testing even in situations where the use of two-tailed testing is inappropriate.

what is needed is to grasp the exact statistics in interpreting statistical testing results. Being inexact under the name of being conservative is not recommendable. In that sense, the authors think that merely reporting testing results such as "It was found to be statistically significant at the 0.05 level of significance (i.e., p < 0.05)." is not good enough.

Although two-tailed testing is more conservative in theory, it decouples the link between the directional research hypothesis and its statistical hypothesis, possibly leading to doubly inflated p values. The authors have also shown that the argument for finding the significant result in the opposite direction has meaning only in the context of discovery rather than in the context of justification. In the case of testing the research hypothesis and its underlying theory, researchers should not simultaneously address the context of discovery and that of justification.

https://www.sciencedirect.com/science/article/pii/S0148296312000550